Dimitri Sivanich [Tue, 16 Oct 2012 12:50:21 +0000 (07:50 -0500)]
x86/irq/ioapic: Check for valid irq_cfg pointer in smp_irq_move_cleanup_interrupt
Posting this patch to fix an issue concerning sparse irqs that
I raised a while back. There was discussion about adding
refcounting to sparse irqs (to fix other potential race
conditions), but that does not appear to have been addressed
yet. This covers the only issue of this type that I've
encountered in this area.
A NULL pointer dereference can occur in
smp_irq_move_cleanup_interrupt() if we haven't yet set up the
irq_cfg pointer in the irq_desc.irq_data.chip_data.
In create_irq_nr() there is a window where we have set
vector_irq in __assign_irq_vector(), but not yet called
irq_set_chip_data() to set the irq_cfg pointer.
Should an IRQ_MOVE_CLEANUP_VECTOR interrupt hit the cpu in question during
this time, smp_irq_move_cleanup_interrupt() will attempt to
process the aforementioned irq, but panic when accessing
irq_cfg.
Only continue processing the irq if irq_cfg is non-NULL.
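The shape of the fix, in smp_irq_move_cleanup_interrupt()'s vector
walk, is roughly the following (a sketch, not the verbatim hunk):

	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
		int irq = __this_cpu_read(vector_irq[vector]);
		struct irq_desc *desc;
		struct irq_cfg *cfg;

		if (irq == -1)
			continue;

		desc = irq_to_desc(irq);
		if (!desc)
			continue;

		cfg = irq_cfg(irq);
		/*
		 * Skip irqs for which create_irq_nr() has assigned a
		 * vector but not yet attached the irq_cfg via
		 * irq_set_chip_data(); cfg is still NULL then, and
		 * dereferencing it is what caused the panic.
		 */
		if (!cfg)
			continue;

		/* ... existing cleanup logic ... */
	}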
Matt Fleming [Fri, 19 Oct 2012 12:25:46 +0000 (13:25 +0100)]
x86/efi: Fix oops caused by incorrect set_memory_uc() usage
Calling __pa() with an ioremap'd address is invalid. If we
encounter an efi_memory_desc_t without EFI_MEMORY_WB set in
->attribute we currently call set_memory_uc(), which in turn
calls __pa() on a potentially ioremap'd address.
On CONFIG_X86_32 this results in the following oops:
The only time we can call set_memory_uc() for a memory region is
when it is part of the direct kernel mapping. For the case where
we ioremap a memory region we must leave it alone.
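In other words, the mapping decision needs to look roughly like this
(a hedged sketch built from efi_memory_desc_t fields, not the exact
control flow of the patch):

	u64 size = md->num_pages << EFI_PAGE_SHIFT;
	void *va;

	if (md->attribute & EFI_MEMORY_WB) {
		/*
		 * Covered by the direct kernel mapping: __pa() and
		 * set_memory_uc() are legitimate here.
		 */
		va = __va(md->phys_addr);
	} else {
		/*
		 * Not in the direct mapping: ioremap it uncached and
		 * leave it alone; calling set_memory_uc() (and thus
		 * __pa()) on the result is exactly the bug.
		 */
		va = ioremap_nocache(md->phys_addr, size);
	}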
This patch reimplements the fix from e8c7106280a3 ("x86, efi:
Calling __pa() with an ioremap()ed address is invalid") which
was reverted in e1ad783b12ec because it caused a regression on
some MacBooks (they hung at boot). The regression was caused
because the commit only marked EFI_RUNTIME_SERVICES_DATA as
E820_RESERVED_EFI, when it should have marked all regions that
have the EFI_MEMORY_RUNTIME attribute.
Despite first impressions, it's not possible to use
ioremap_cache() to map all cached memory regions on
CONFIG_X86_64 because of the way that the memory map might be
configured as detailed in the following bug report,
Vince Weaver [Wed, 17 Oct 2012 17:05:45 +0000 (13:05 -0400)]
perf/x86: Enable overflow on Intel KNC with a custom knc_pmu_handle_irq()
Although based on the Intel P6 design, the interrupt mechanism
for KNC more closely resembles the Intel architectural
perfmon one.
We can't just re-use that code though, because KNC has different
MSR numbers for the status and ack registers.
In this case we just cut-and-paste from perf_event_intel.c
with some minor changes, as it looks like it would not be
worth the trouble to change that code to be MSR-configurable.
Signed-off-by: Vince Weaver <vincent.weaver@maine.edu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net> Cc: eranian@gmail.com Cc: Meadows Lawrence F <lawrence.f.meadows@intel.com> Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1210171304410.23243@vincent-weaver-1.um.maine.edu
[ Small stylistic edits. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
Vince Weaver [Wed, 17 Oct 2012 17:04:33 +0000 (13:04 -0400)]
perf/x86: Remove cpuc->enabled check on Intel KNC event enable/disable
x86_pmu.enable() is called from x86_pmu_enable() with
cpuc->enabled set to 0. This means we weren't re-enabling the
counters after a context switch.
This patch just removes the check, as it shouldn't be necessary
(and the equivalent x86_ generic code does not have the checks).
The origin of this problem is the KNC driver being based on the
P6 one. The P6 driver also has this issue, but works anyway
due to various lucky accidents.
Signed-off-by: Vince Weaver <vincent.weaver@maine.edu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net> Cc: eranian@gmail.com Cc: Meadows Lawrence F <lawrence.f.meadows@intel.com> Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1210171303290.23243@vincent-weaver-1.um.maine.edu Signed-off-by: Ingo Molnar <mingo@kernel.org>
Vince Weaver [Wed, 17 Oct 2012 17:03:21 +0000 (13:03 -0400)]
perf/x86: Make Intel KNC use full 40-bit width of counters
Early versions of Intel KNC chips have a bug where bits above 32
were not properly set. We worked around this by only using the
bottom 32 bits (out of 40 that should be available).
It turns out this workaround breaks overflow handling.
The buggy silicon will in theory never be used in production
systems, so remove this workaround so we get proper overflow
support.
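Concretely, this amounts to advertising the full width in the KNC
x86_pmu description, roughly (field names as in struct x86_pmu of
this era; only the relevant initializers shown):

	static __initconst struct x86_pmu knc_pmu = {
		/* ... */
		.cntval_bits		= 40,
		.cntval_mask		= (1ULL << 40) - 1,
		/* ... */
	};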
Signed-off-by: Vince Weaver <vincent.weaver@maine.edu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Mackerras <paulus@samba.org> Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net> Cc: eranian@gmail.com Cc: Meadows Lawrence F <lawrence.f.meadows@intel.com> Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1210171302140.23243@vincent-weaver-1.um.maine.edu Signed-off-by: Ingo Molnar <mingo@kernel.org>
This, beyond handling corner cases, also fixes some build warnings:
arch/x86/kernel/cpu/perf_event_intel_uncore.c: In function ‘snbep_uncore_pci_disable_box’:
arch/x86/kernel/cpu/perf_event_intel_uncore.c:124:9: warning: ‘config’ is used uninitialized in this function [-Wuninitialized]
arch/x86/kernel/cpu/perf_event_intel_uncore.c: In function ‘snbep_uncore_pci_enable_box’:
arch/x86/kernel/cpu/perf_event_intel_uncore.c:135:9: warning: ‘config’ is used uninitialized in this function [-Wuninitialized]
arch/x86/kernel/cpu/perf_event_intel_uncore.c: In function ‘snbep_uncore_pci_read_counter’:
arch/x86/kernel/cpu/perf_event_intel_uncore.c:164:2: warning: ‘count’ is used uninitialized in this function [-Wuninitialized]
Jan Beulich [Thu, 4 Oct 2012 13:48:10 +0000 (14:48 +0100)]
x86-64: Fix page table accounting
Commit 20167d3421a089a1bf1bd680b150dc69c9506810 ("x86-64: Fix
accounting in kernel_physical_mapping_init()") went a little too
far by entirely removing the counting of pre-populated page
tables: this should be done at boot time (to cover the page
tables set up in early boot code), but shouldn't be done during
memory hot add.
Hence, re-add the removed increments of "pages", but make them
and the one in phys_pte_init() conditional upon !after_bootmem.
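The resulting accounting has this shape (a sketch of the
phys_pte_init() loop, not the verbatim diff):

	if (pte_val(*pte)) {
		/*
		 * Already populated: count it only at boot time, where
		 * it covers page tables set up by early boot code; do
		 * not count it again during memory hot add.
		 */
		if (!after_bootmem)
			pages++;
		continue;
	}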
Jiri Olsa [Wed, 10 Oct 2012 12:53:18 +0000 (14:53 +0200)]
perf test: Add automated tests for pmu sysfs translated events
Add automated tests for all events found under PMU/events
directory. Tested events are in the 'cpu/event=xxx/u' format,
where 'xxx' is substituted by every event found.
The 'event=xxx' term is translated to the cpu specific term.
We only check that the event is created (not the real config
numbers) and that the modifier is properly set.
Signed-off-by: Jiri Olsa <jolsa@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net> Cc: Paul Mackerras <paulus@samba.org> Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Stephane Eranian <eranian@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1349873598-12583-9-git-send-email-jolsa@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Jiri Olsa [Wed, 10 Oct 2012 12:53:16 +0000 (14:53 +0200)]
perf tools: Fix PMU object alias initialization
pmu_lookup() should return PMUs that do not expose the 'events'
group attribute in sysfs. It should fail, however, when any other
error is hit during the 'events' lookup (i.e. when pmu_aliases() fails).
Signed-off-by: Jiri Olsa <jolsa@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net> Cc: Paul Mackerras <paulus@samba.org> Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1349873598-12583-7-git-send-email-jolsa@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Jiri Olsa [Wed, 10 Oct 2012 12:53:11 +0000 (14:53 +0200)]
perf/x86: Make hardware event translations available in sysfs
Add support to display hardware event translations available
through sysfs. Add an 'events' group attribute under the sysfs
x86 PMU record with an attribute/file for each hardware event.
This patch adds only the backbone for PMUs to display config under
the 'events' directory. The specific PMU support itself will come
in subsequent patches; this is how the sysfs group will look:
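A minimal sketch of that backbone, using the standard sysfs
attribute-group API (the attribute list is left empty here; the
actual event attributes arrive with the PMU-specific patches):

	static struct attribute *events_attr[] = {
		/* filled in by PMU-specific patches */
		NULL,
	};

	static struct attribute_group x86_pmu_events_group = {
		.name	= "events",
		.attrs	= events_attr,
	};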
Vince Weaver [Fri, 19 Oct 2012 21:33:38 +0000 (17:33 -0400)]
perf/x86: Remove P6 cpuc->enabled check
Between 2.6.33 and 2.6.34 the PMU code was made modular.
The x86_pmu_enable() call was extended to disable cpuc->enabled
and iterate the counters, enabling one at a time, before calling
enable_all() at the end, followed by re-enabling cpuc->enabled.
Since cpuc->enabled was set to 0, that change effectively caused
the "val |= ARCH_PERFMON_EVENTSEL_ENABLE;" code in p6_pmu_enable_event()
and p6_pmu_disable_event() to be dead code that was never called.
This change removes this code (which was confusing) and adds some
extra commentary to make it more clear what is going on.
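After the removal the enable path is simply (a sketch of the result;
on P6 the actual enabling happens globally, via the enable bit that
p6_pmu_enable_all() sets in PerfEvtSel0):

	static void p6_pmu_enable_event(struct perf_event *event)
	{
		struct hw_perf_event *hwc = &event->hw;

		/*
		 * Just program the event; the dead cpuc->enabled test
		 * that used to OR in ARCH_PERFMON_EVENTSEL_ENABLE here
		 * is gone.
		 */
		(void)wrmsrl_safe(hwc->config_base, hwc->config);
	}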
Paul Turner [Thu, 4 Oct 2012 11:18:32 +0000 (13:18 +0200)]
sched: Introduce temporary FAIR_GROUP_SCHED dependency for load-tracking
Per-entity load-tracking is generally useful beyond computing shares
distribution, e.g. for runnable-based load-balance (in progress),
governors, power-management, etc. However, these facilities are not yet
consumers of this data, so make the tracking depend on FAIR_GROUP_SCHED
for now. This may be trivially reverted when the information is
required; until then we avoid paying the overhead for calculations we
will not use.
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141507.422162369@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Paul Turner [Thu, 4 Oct 2012 11:18:32 +0000 (13:18 +0200)]
sched: Make __update_entity_runnable_avg() fast
__update_entity_runnable_avg forms the core of maintaining an entity's runnable
load average. In this function we charge the accumulated run-time since last
update and handle appropriate decay. In some cases, e.g. a waking task, this
time interval may be much larger than our period unit.
Fortunately we can exploit some properties of our series to perform decay for a
blocked update in constant time and account the contribution for a running
update in essentially-constant* time.
[*]: Any running entity should be performing updates at the tick,
which gives us a soft limit of 1 jiffy between updates, and we can
compute up to a 32-jiffy update in a single pass.
C program to generate the magic constants in the arrays (the includes,
constants and main() are filled in here so the program builds and runs
standalone):

#include <math.h>
#include <stdio.h>

#define N		32
#define WMULT_SHIFT	32

const long WMULT_CONST = ((1UL << N) - 1);
double y;

long runnable_avg_yN_inv[N];

void calc_mult_inv(void)
{
	int i;
	double yn = 0;

	printf("inverses\n");
	for (i = 0; i < N; i++) {
		yn = (double)WMULT_CONST * pow(y, i);
		runnable_avg_yN_inv[i] = yn;
		printf("%2d: 0x%8lx\n", i, runnable_avg_yN_inv[i]);
	}
	printf("\n");
}

long mult_inv(long c, int n)
{
	return (c * runnable_avg_yN_inv[n]) >> WMULT_SHIFT;
}

void calc_yn_sum(int n)
{
	int i;
	double sum = 0, sum_fl = 0;

	/*
	 * We take the floored sum to ensure the sum of partial sums is never
	 * larger than the actual sum.
	 */
	printf("sum y^n\n");
	printf(" %8s %8s %8s\n", "exact", "floor", "error");
	for (i = 1; i <= n; i++) {
		sum = (y * sum + y * 1024);
		sum_fl = floor(y * sum_fl + y * 1024);
		printf("%2d: %8.0f %8.0f %8.0f\n", i, sum, sum_fl,
			sum_fl - sum);
	}
	printf("\n");
}

void calc_conv(long n)
{
	long old_n;
	int i = -1;

	printf("convergence (LOAD_AVG_MAX, LOAD_AVG_MAX_N)\n");
	do {
		old_n = n;
		n = mult_inv(n, 1) + 1024;
		i++;
	} while (n != old_n);
	printf("%d> %ld\n", i - 1, n);
	printf("\n");
}

int main(void)
{
	/* y is chosen such that y^N = 1/2 */
	y = pow(0.5, 1 / (double)N);

	calc_mult_inv();
	calc_conv(1024);
	calc_yn_sum(N);

	return 0;
}
Paul Turner [Thu, 4 Oct 2012 11:18:31 +0000 (13:18 +0200)]
sched: Update_cfs_shares at period edge
Now that our measurement intervals are small (~1ms) we can amortize
the posting of update_shares() to roughly once per period overflow.
This is a large cost saving for frequently switching tasks.
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141507.200772172@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Now that running entities maintain their own load-averages the work we must do
in update_shares() is largely restricted to the periodic decay of blocked
entities. This allows us to be a little less pessimistic regarding our
occupancy on rq->lock and the associated rq->clock updates required.
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141507.133999170@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Paul Turner [Thu, 4 Oct 2012 11:18:31 +0000 (13:18 +0200)]
sched: Replace update_shares weight distribution with per-entity computation
Now that the machinery is in place to compute contributed load in a
bottom-up fashion, replace the shares distribution code within
update_shares() accordingly.
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141507.061208672@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Paul Turner [Thu, 4 Oct 2012 11:18:31 +0000 (13:18 +0200)]
sched: Maintain runnable averages across throttled periods
With bandwidth control, tracked entities may cease execution according
to user-specified bandwidth limits. Charging this time as either
throttled or blocked, however, is incorrect and would falsely skew in
either direction.
What we actually want is for any throttled periods to be "invisible" to
load-tracking as they are removed from the system for that interval and
contribute normally otherwise.
Do this by moderating the progression of time to omit any periods in which the
entity belonged to a throttled hierarchy.
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141506.998912151@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Paul Turner [Thu, 4 Oct 2012 11:18:31 +0000 (13:18 +0200)]
sched: Normalize tg load contributions against runnable time
Entities of equal weight should receive equitable distribution of cpu time.
This is challenging in the case of a task_group's shares as execution may be
occurring on multiple cpus simultaneously.
To handle this we divide up the shares into weights proportionate with
the load on each cfs_rq. This does not, however, account for the fact
that the sum of the parts may be less than one cpu and so we need to
normalize:
load(tg) = min(runnable_avg(tg), 1) * tg->shares
Where runnable_avg is the aggregate time in which the task_group had runnable
children.
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com>. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141506.930124292@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Paul Turner [Thu, 4 Oct 2012 11:18:31 +0000 (13:18 +0200)]
sched: Compute load contribution by a group entity
Unlike task entities, which have a fixed weight, group entities instead
own a fraction of their parenting task_group's shares as their
contributed weight.
Compute this fraction so that we can correctly account hierarchies and shared
entity nodes.
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141506.855074415@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Paul Turner [Thu, 4 Oct 2012 11:18:30 +0000 (13:18 +0200)]
sched: Aggregate total task_group load
Maintain a global running sum of the average load seen on each cfs_rq belonging
to each task group so that it may be used in calculating an appropriate
shares:weight distribution.
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141506.792901086@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Paul Turner [Thu, 4 Oct 2012 11:18:30 +0000 (13:18 +0200)]
sched: Account for blocked load waking back up
When a running entity blocks we migrate its tracked load to
cfs_rq->blocked_runnable_avg. In the sleep case this occurs while holding
rq->lock and so is a natural transition. Wake-ups, however, are
potentially asynchronous in the presence of migration and so special
care must be taken.
We use an atomic counter to track such migrated load, taking care to match this
with the previously introduced decay counters so that we don't migrate too much
load.
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141506.726077467@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Paul Turner [Thu, 4 Oct 2012 11:18:30 +0000 (13:18 +0200)]
sched: Add an rq migration call-back to sched_class
Since we are now doing bottom up load accumulation we need explicit
notification when a task has been re-parented so that the old hierarchy can be
updated.
Adds: migrate_task_rq(struct task_struct *p, int next_cpu)
(The alternative is to do this out of __set_task_cpu, but it was suggested that
this would be a cleaner encapsulation.)
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141506.660023400@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Paul Turner [Thu, 4 Oct 2012 11:18:30 +0000 (13:18 +0200)]
sched: Maintain the load contribution of blocked entities
We are currently maintaining:
runnable_load(cfs_rq) = \Sum task_load(t)
For all running children t of cfs_rq. While this can be naturally
updated for tasks in a runnable state (as they are scheduled), it does
not account for the load contributed by blocked task entities.
This can be solved by introducing a separate accounting for blocked
load:
blocked_load(cfs_rq) = \Sum task_load(b), for all blocked children b
Obviously we do not want to iterate over all blocked entities to account for
their decay, we instead observe that:
runnable_load(t) = \Sum p_i*y^i
and that to account for an additional idle period we only need to compute:
y*runnable_load(t).
This means that we can compute all blocked entities at once by evaluating:
blocked_load(cfs_rq)' = y * blocked_load(cfs_rq)
Finally we maintain a decay counter so that when a sleeping entity re-awakens
we can determine how much of its load should be removed from the blocked sum.
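A runnable user-space sketch of the decay side (names hypothetical;
Y_ONE is y^1 in 32.32 fixed point, i.e. runnable_avg_yN_inv[1] as
printed by the generator program in the entry above):

	#include <stdio.h>
	#include <stdint.h>

	#define Y_ONE 0xfa83b2daULL	/* y^1, with y^32 = 1/2 */

	static uint64_t decay_once(uint64_t load)
	{
		return (load * Y_ONE) >> 32;
	}

	int main(void)
	{
		uint64_t blocked_load = 1 << 20; /* aggregate blocked load */
		uint64_t decay_seq = 0;		 /* decay periods elapsed */
		int i;

		/* per-period maintenance: blocked_load' = y * blocked_load */
		for (i = 0; i < 32; i++) {
			blocked_load = decay_once(blocked_load);
			decay_seq++;
		}

		/*
		 * An entity that blocked at decay_seq == s and wakes now
		 * removes its contribution decayed by y^(decay_seq - s),
		 * so the blocked sum stays consistent.
		 */
		printf("after 32 periods: %llu (~half of %u)\n",
		       (unsigned long long)blocked_load, 1u << 20);
		return 0;
	}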
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141506.585389902@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Paul Turner [Thu, 4 Oct 2012 11:18:30 +0000 (13:18 +0200)]
sched: Aggregate load contributed by task entities on parenting cfs_rq
For a given task t, we can compute its contribution to load as:
task_load(t) = runnable_avg(t) * weight(t)
On a parenting cfs_rq we can then aggregate:
runnable_load(cfs_rq) = \Sum task_load(t), for all runnable children t
Maintain this bottom up, with task entities adding their contributed load to
the parenting cfs_rq sum. When a task entity's load changes we add the same
delta to the maintained sum.
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141506.514678907@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Paul Turner [Thu, 4 Oct 2012 11:18:29 +0000 (13:18 +0200)]
sched: Track the runnable average on a per-task entity basis
Instead of tracking and averaging the load parented by a cfs_rq, we
can track entity load directly, with the load for a given cfs_rq then
being the sum of its children's.
To do this we represent the historical contribution to runnable average
within each trailing 1024us of execution as the coefficients of a
geometric series.
Here u_i is the usage in the i-th most recent 1024us period
(approximately 1ms) and y is chosen such that y^k = 1/2. We currently
choose k to be 32, which roughly translates to about a sched period.
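A runnable user-space sketch of the accumulation (hypothetical and
heavily simplified relative to __update_entity_runnable_avg(): it
charges whole 1024us periods only):

	#include <stdio.h>
	#include <stdint.h>

	#define Y_ONE 0xfa83b2daULL	/* y^1 in 32.32 fixed point */

	struct avg_sketch {
		uint64_t runnable_sum;	/* \Sum u_i * y^i  */
		uint64_t period_sum;	/* \Sum 1024 * y^i */
	};

	/*
	 * Charge one full 1024us period: decay the history by y, then
	 * add the new period's usage u (0..1024) as the y^0 coefficient.
	 */
	static void charge_period(struct avg_sketch *a, uint32_t u)
	{
		a->runnable_sum = ((a->runnable_sum * Y_ONE) >> 32) + u;
		a->period_sum   = ((a->period_sum   * Y_ONE) >> 32) + 1024;
	}

	int main(void)
	{
		struct avg_sketch a = { 0, 0 };
		int i;

		/* a task runnable 50% of the time converges toward 0.5 */
		for (i = 0; i < 100; i++)
			charge_period(&a, 512);

		printf("runnable avg ~= %.3f\n",
		       (double)a.runnable_sum / (double)a.period_sum);
		return 0;
	}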
Signed-off-by: Paul Turner <pjt@google.com> Reviewed-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120823141506.372695337@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra [Tue, 23 Oct 2012 15:47:12 +0000 (17:47 +0200)]
sysctl/sched: Fix 'defined but not used' warning
Since commit ("sched/numa: Implement NUMA home-node selection code")
building a kernel with CONFIG_SMP disabled causes the following
warning:
kernel/sysctl.c:259:12: warning: 'min_sched_tunable_scaling' defined but not used [-Wunused-variable]
kernel/sysctl.c:260:12: warning: 'max_sched_tunable_scaling' defined but not used [-Wunused-variable]
Reported-by: Fabio Estevam <fabio.estevam@freescale.com>
[ Ingo preferred extra #ifdef variant over the __maybe_unused ] Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/n/tip-p9w5w57ylinrj9zakvhc5zay@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Dave Young [Thu, 18 Oct 2012 06:33:23 +0000 (14:33 +0800)]
Revert "x86/mm: Fix the size calculation of mapping tables"
Commit:
722bc6b16771 x86/mm: Fix the size calculation of mapping tables
It tried to address the issue that the first 2/4MB should use 4k pages
if PSE is enabled, but the extra counts should only be applied on x86_32.
This commit caused a kdump regression: the kdump kernel hangs.
Work is in progress to fundamentally fix the various page table
initialization issues that we have, via the design suggested
by H. Peter Anvin, but it's not ready yet to be merged.
So, to get a working kdump, revert to the last known working version,
i.e. revert this commit and a follow-up fix (which was incomplete):
bd2753b2dda7 x86/mm: Only add extra pages count for the first memory range during pre-allocation
Tested kdump on physical and virtual machines.
Signed-off-by: Dave Young <dyoung@redhat.com> Acked-by: Yinghai Lu <yinghai@kernel.org> Acked-by: Cong Wang <xiyou.wangcong@gmail.com> Acked-by: Flavio Leitner <fbl@redhat.com> Tested-by: Flavio Leitner <fbl@redhat.com> Cc: Dan Carpenter <dan.carpenter@oracle.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Flavio Leitner <fbl@redhat.com> Cc: Tejun Heo <tj@kernel.org> Cc: ianfang.cn@gmail.com Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: <stable@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Ingo Molnar [Wed, 24 Oct 2012 07:04:41 +0000 (09:04 +0200)]
numa, sched: Eliminate unused functions
Andrew Morton reported these allnoconfig warnings:
kernel/sched/fair.c:800: warning: 'task_h_load' declared 'static' but never defined
kernel/sched/fair.c:1004: warning: 'account_numa_enqueue' defined but not used
These are only used on CONFIG_SMP - fix it.
We should eventually resolve the Kconfig complexities here by turning
SMP (and NUMA) scheduling either into a separate source code file, or
by creating a single-model scheduler, which happens to build to a small
object file on !CONFIG_SMP or !CONFIG_NUMA kernels not via #ifdefs but
via more clever build time code elimination and zero-size data fields.
That's not a simple patch.
Reported-by: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Rik van Riel <riel@redhat.com> Link: http://lkml.kernel.org/n/tip-wyctbug9qKulTs0umsxjyixi@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Andre Przywara [Tue, 9 Oct 2012 15:38:35 +0000 (17:38 +0200)]
x86/perf: Fix virtualization sanity check
In check_hw_exists() we try to detect non-emulated MSR accesses by
writing an arbitrary value into one of the PMU registers and checking
whether its value after a readout is still the same.
This algorithm silently assumes that the register does not already
contain the magic value, which is wrong in at least one situation.
Fix the algorithm to really do a read-modify-write cycle. This fixes
a warning under Xen under some circumstances on AMD family 10h CPUs.
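The fixed probe does a genuine read-modify-write, along these lines
(a sketch of the check_hw_exists() logic; declarations and the
msr_fail path are elided):

	reg = x86_pmu_event_addr(0);
	if (rdmsrl_safe(reg, &val))
		goto msr_fail;
	val ^= 0xffffUL;	/* now guaranteed to differ from the
				   register's current contents */
	ret = wrmsrl_safe(reg, val);
	ret |= rdmsrl_safe(reg, &val_new);
	if (ret || val != val_new)
		goto msr_fail;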
The reasons, in more detail, actually sound like a story from
Believe It or Not!:
First you need an AMD family 10h/12h CPU. These do not reset the
PERF_CTR registers on a reboot.
Now you boot bare metal Linux, which goes successfully through this
check, but leaves the magic value of 0xabcd in the register. You
don't use the performance counters, but do a reboot (warm reset).
Then you choose to boot Xen. The check will be triggered with a
recent Linux kernel as Dom0 again, trying to write 0xabcd into the
MSR. Xen silently drops the write (expected), but the subsequent read
will return the value in the register, which just happens to be the
expected magic value. Thus the test misleadingly succeeds, leaving
the kernel in the belief that the PMU is available. This will trigger
the following message:
Peter Zijlstra [Mon, 22 Oct 2012 17:21:32 +0000 (19:21 +0200)]
numa, sched, mm: Fix NULL-ptr deref
Dan reported that there's a possible NULL pointer deref in this logic;
fix that. Further fix it to avoid a possible infinite loop (only
reachable, however unlikely, if no vma is migratable). Also don't
endlessly loop on a large length: simply truncate at the end of the
address space and restart on the next go.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Rik van Riel <riel@redhat.com> Link: http://lkml.kernel.org/n/tip-mnkio02xxtttiepsg9ek6qkw@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra [Mon, 22 Oct 2012 18:15:40 +0000 (20:15 +0200)]
numa, sched: Implement slow start for working set sampling
Add a 1 second delay before starting to scan the working set of
a task and starting to balance it amongst nodes.
The theory is that short-run tasks benefit very little from NUMA
placement: they come and go, and they had better stick to the node they
were started on. As tasks mature and rebalance to other CPUs and nodes,
their NUMA placement has to change, and it starts to matter more and
more.
In practice this change fixes an observable kbuild regression:
# [ a perf stat --null --repeat 10 test of ten bzImage builds to /dev/shm ]
!NUMA:
45.291088843 seconds time elapsed ( +- 0.40% )
45.154231752 seconds time elapsed ( +- 0.36% )
+NUMA, no slow start:
46.172308123 seconds time elapsed ( +- 0.30% )
46.343168745 seconds time elapsed ( +- 0.25% )
The implementation is simple and straightforward, most of the patch
deals with adding the /proc/sys/kernel/sched_numa_scan_delay_ms tunable
knob and with renaming task_period to scan_period.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Rik van Riel <riel@redhat.com> Link: http://lkml.kernel.org/n/tip-vn7p3ynbwqt3qqewhdlvjltc@git.kernel.org
[ Wrote the changelog, ran measurements, tuned the default. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
Ingo Molnar [Wed, 24 Oct 2012 05:47:40 +0000 (07:47 +0200)]
Merge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent
Pull perf/urgent fixes from Arnaldo Carvalho de Melo:
* Validate syscall id before growing syscall table in 'trace', fixing potential
excessive memory usage.
* Validate perf_sample.raw_data, making 'trace' more robust, avoiding some
potential SEGFAULTs when reading tracepoint fields.
* Fix exclude_guest parse events 'perf test's, from Jiri Olsa.
* Do not flush maps on COMM, which is sent by the kernel when a process
is exec'ed but also when a process changes its name. Since we were
assuming a COMM always meant an EXEC, we were losing track of a
process's maps by flushing them. Fix from Luigi Semenzato.
* A recent patch introduced a problem by not initializing what should
be the first kind of pager to use, 'man'; instead it was being left as
zero, which means no pager. This caused 'perf subcmd --help' to produce
no output. Fix from Namhyung Kim.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Linus Torvalds [Wed, 24 Oct 2012 02:17:27 +0000 (05:17 +0300)]
Merge tag 'stable/for-linus-3.7-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen
Pull xen bug-fixes from Konrad Rzeszutek Wilk:
- Fix mysterious SIGSEGV or SIGKILL in applications due to corruption
of the %eip when returning from a signal handler.
- Fix various ARM compile issues after the merge fallout.
- Continue making more of the Xen generic code usable by the ARM
platform.
- Fix SR-IOV passthrough to mirror multifunction PCI devices.
- Fix various compile warnings.
- Remove hypercalls that don't exist anymore.
* tag 'stable/for-linus-3.7-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen:
xen: dbgp: Fix warning when CONFIG_PCI is not enabled.
xen: arm: comment on why 64-bit xen_pfn_t is safe even on 32 bit
xen: balloon: use correct type for frame_list
xen/x86: don't corrupt %eip when returning from a signal handler
xen: arm: make p2m operations NOPs
xen: balloon: don't include e820.h
xen: grant: use xen_pfn_t type for frame_list.
xen: events: pirq_check_eoi_map is X86 specific
xen: XENMEM_translate_gpfn_list was remove ages ago and is unused.
xen: sysfs: fix build warning.
xen: sysfs: include err.h for PTR_ERR etc
xen: xenbus: quirk uses x86 specific cpuid
xen PV passthru: assign SR-IOV virtual functions to separate virtual slots
xen/xenbus: Fix compile warning.
xen/x86: remove duplicated include from enlighten.c
Linus Torvalds [Wed, 24 Oct 2012 01:07:51 +0000 (04:07 +0300)]
Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf fixes from Ingo Molnar:
"Most of these are uprobes race fixes from Oleg, and their preparatory
cleanups. (It's larger than what I'd normally send for an -rc kernel,
but they looked significant enough to not delay them.)
There's also an oprofile fix and an uncore PMU fix."
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (22 commits)
perf/x86: Disable uncore on virtualized CPUs
oprofile, x86: Fix wrapping bug in op_x86_get_ctrl()
ring-buffer: Check for uninitialized cpu buffer before resizing
uprobes: Fix the racy uprobe->flags manipulation
uprobes: Fix prepare_uprobe() race with itself
uprobes: Introduce prepare_uprobe()
uprobes: Fix handle_swbp() vs unregister() + register() race
uprobes: Do not delete uprobe if uprobe_unregister() fails
uprobes: Don't return success if alloc_uprobe() fails
uprobes/x86: Only rep+nop can be emulated correctly
uprobes: Simplify is_swbp_at_addr(), remove stale comments
uprobes: Kill set_orig_insn()->is_swbp_at_addr()
uprobes: Introduce copy_opcode(), kill read_opcode()
uprobes: Kill set_swbp()->is_swbp_at_addr()
uprobes: Restrict valid_vma(false) to skip VM_SHARED vmas
uprobes: Change valid_vma() to demand VM_MAYEXEC rather than VM_EXEC
uprobes: Change write_opcode() to use FOLL_FORCE
uprobes: Move clear_thread_flag(TIF_UPROBE) to uprobe_notify_resume()
uprobes: Kill UTASK_BP_HIT state
uprobes: Fix UPROBE_SKIP_SSTEP checks in handle_swbp()
...
Linus Torvalds [Wed, 24 Oct 2012 01:07:02 +0000 (04:07 +0300)]
Merge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull core kernel fixes from Ingo Molnar:
"Two small fixes"
* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
Documentation: Reflect the new location of the NMI watchdog info
nohz: Fix idle ticks in cpu summary line of /proc/stat
Linus Torvalds [Wed, 24 Oct 2012 01:05:56 +0000 (04:05 +0300)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 fixes from Martin Schwidefsky:
"Among the usual minor bug fixes the more interesting patches are the
perf counters for the latest machine, the missing select to enable
transparent huge pages and a build fix for the UAPI rework."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390,uapi: do not use uapi/asm-generic/kvm_para.h
s390/cache: fix data/instruction cache output
s390: fix linker script for 31 bit builds
s390/thp: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
s390/kdump: Use 64 bit mode for 0x10000 entry point
perf_cpum_cf: Add support for counters available with IBM zEC12
s390/css: stop stsch loop after cc 3
s390/cio: use generic bitmap functions
s390/chpid: make headers usable (again)
Linus Torvalds [Wed, 24 Oct 2012 01:05:15 +0000 (04:05 +0300)]
Merge branch 'stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile
Pull tile fixes from Chris Metcalf:
"This fixes one issue with compiler flags that can cause modules not to
load, and cleans up some warnings with ELF_R_xxx defines."
* 'stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
arch/tile: avoid build warnings from duplicate ELF_R_xxx #defines
arch/tile: avoid generating .eh_frame information in modules
Chris Metcalf [Fri, 19 Oct 2012 15:43:11 +0000 (11:43 -0400)]
arch/tile: avoid generating .eh_frame information in modules
The tile tool chain uses the .eh_frame information for backtracing.
The vmlinux build drops any .eh_frame sections at link time, but when
present in kernel modules, it causes a module load failure due to the
presence of unsupported pc-relative relocations. When compiling to
use compiler feedback support, the compiler by default omits .eh_frame
information, so we don't see this problem. But when not using feedback,
we need to explicitly suppress the .eh_frame.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com> Cc: stable@vger.kernel.org
Rik van Riel [Thu, 18 Oct 2012 21:20:21 +0000 (17:20 -0400)]
numa, mm: Rename the PROT_NONE fault handling functions to *_numa()
Having the function name indicate what the function is used
for makes the code a little easier to read. Furthermore,
the fault handling code largely consists of do_...._page
functions.
Rename the NUMA working set sampling fault handling functions
to _numa() names, to indicate what they are used for.
This separates the naming from the regular PROT_NONE naming.
Andrea Arcangeli [Tue, 18 Sep 2012 00:14:51 +0000 (02:14 +0200)]
x86, mm: Prevent gcc to re-read the pagetables
GCC is very likely to read the pagetables just once and cache them in
the local stack or in a register, but it can also decide to re-read the
pagetables. The problem is that the pagetables in those places can
change out from under gcc.
In the page fault we only hold the ->mmap_sem for reading and both the
page fault and MADV_DONTNEED only take the ->mmap_sem for reading and we
don't hold any PT lock yet.
In get_user_pages_fast() the TLB shootdown code can clear the pagetables
before firing any TLB flush (the page can't be freed until the TLB
flushing IPI has been delivered but the pagetables will be cleared well
before sending any TLB flushing IPI).
With THP/hugetlbfs the pmd (and pud for hugetlbfs giga pages) can
change as well under gup_fast, it won't just be cleared for the same
reasons described above for the pte in the page fault case.
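The pattern that defends against this is to snapshot the entry
exactly once, e.g. in the gup_pmd_range() walk (a sketch of the
pattern only; the full patch also covers the pud and page-fault
paths):

	pmd_t pmd = ACCESS_ONCE(*pmdp);

	/*
	 * Force a single read: THP can change *pmdp under us, and the
	 * TLB shootdown code can clear it before any flushing IPI is
	 * sent, so every check below must run against one consistent
	 * snapshot.
	 */
	if (pmd_none(pmd) || pmd_trans_splitting(pmd))
		return 0;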
[ This patch was picked up from the AutoNUMA tree. ]
Originally-by: Andrea Arcangeli <aarcange@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Rik van Riel <riel@redhat.com>
[ Ported to this tree, because we are modifying the page tables
at a high rate here, so this problem is potentially more
likely to show up in practice. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
Mel Gorman [Wed, 12 Oct 2011 19:06:51 +0000 (21:06 +0200)]
mm: Check if PTE is already allocated during page fault
With transparent hugepage support, handle_mm_fault() has to be careful
that a normal PMD has been established before handling a PTE fault. To
achieve this, it used __pte_alloc() directly instead of pte_alloc_map
as pte_alloc_map is unsafe to run against a huge PMD. pte_offset_map()
is called once it is known the PMD is safe.
pte_alloc_map() is smart enough to check if a PTE is already present
before calling __pte_alloc but this check was lost. As a consequence,
PTEs may be allocated unnecessarily and the page table lock taken.
This useless PTE does get cleaned up, but it's a performance hit which
is visible in page_test from aim9.
This patch simply re-adds the check normally done by pte_alloc_map()
to see if the PTE needs to be allocated before taking the page table
lock. The effect is noticeable in page_test from aim9.
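The re-added check looks roughly like this in the handle_mm_fault()
path (a sketch mirroring what pte_alloc_map() used to do, not the
verbatim hunk):

	/*
	 * Use __pte_alloc() instead of pte_alloc_map(), because we
	 * can't run pte_offset_map() on pmds where a huge pmd might be
	 * created from a different thread; but first do the cheap
	 * pmd_none() check that pte_alloc_map() would have done, so the
	 * page table lock is only taken when a PTE page actually needs
	 * allocating.
	 */
	if (unlikely(pmd_none(*pmd)) &&
	    unlikely(__pte_alloc(mm, vma, pmd, address)))
		return VM_FAULT_OOM;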
(While this affects 2.6.38, it is a performance rather than a
functional bug and normally outside the rules for -stable. While the
big performance differences are against a microbenchmark, the
difference in fork and exec performance may be significant enough that
-stable wants to consider the patch.)
Reported-by: Raz Ben Yehuda <raziebe@gmail.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Rik van Riel <riel@redhat.com>
[ Picked this up from the AutoNUMA tree to help
it upstream and to allow apples-to-apples
performance comparisons. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
Linus Torvalds [Tue, 23 Oct 2012 05:51:07 +0000 (08:51 +0300)]
Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
Pull drm fixes from Dave Airlie:
"Fixes for intel and nouveau mainly.
- intel: disable HSW by default, sdvo fixes, link train regression
fix
- nouveau: acpi rom loading regression fix, with a few other fixes
from the rework
- core: just other minor fixes and race fixes for ttm."
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux: (24 commits)
drm/ttm: Fix a theoretical race in ttm_bo_cleanup_refs()
drm/ttm: Fix a theoretical race
drm: platform: Don't initialize driver-private data
drm/debugfs: remove redundant info from gem_names
drm: fb: cma: Fail gracefully on allocation failure
drm: fb: cma: Fix typo in debug message
drm/nouveau/clock: fix missing pll type/addr when matching default entry
drm/nouveau/fb: fix reporting of memory type on GF8+ IGPs
drm/nv41/vm: don't init hw pciegart on boards with agp bridge
drm/nouveau/bios: fetch full 4KiB block to determine ACPI ROM image size
drm/nouveau: validate vbios size
drm/nouveau: warn when trying to free mm which is still in use
drm/nouveau: fix nouveau_mm/nouveau_mm_node leak
drm/nouveau/bios: improve error handling when reading the vbios from ACPI
drm/nouveau: handle same-fb page flips
drm/i915: Initialize obj->pages before use by i915_gem_object_do_bit17_swizzle()
drm/i915: Add no-lvds quirk for Supermicro X7SPA-H
drm/i915: Insert i915_preliminary_hw_support variable.
drm/i915: shut up spurious WARN in the gtt fault handler
Revert "drm/i915: Try harder to complete DP training pattern 1"
...
Linus Torvalds [Tue, 23 Oct 2012 05:48:26 +0000 (08:48 +0300)]
Merge tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Pull ext4 fixes from Ted Ts'o:
"Various bug fixes for ext4. The most serious of them fixes a security
bug (CVE-2012-4508) which leads to stale data exposure when we have
fallocate racing against writes to files undergoing delayed
allocation. We also have two fixes for the metadata checksum feature,
the most serious of which can cause the superblock to have an invalid
checksum after a power failure."
* tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
ext4: Avoid underflow in ext4_trim_fs()
ext4: Checksum the block bitmap properly with bigalloc enabled
ext4: fix undefined bit shift result in ext4_fill_flex_info
ext4: fix metadata checksum calculation for the superblock
ext4: race-condition protection for ext4_convert_unwritten_extents_endio
ext4: serialize fallocate with ext4_convert_unwritten_extents
Linus Torvalds [Tue, 23 Oct 2012 05:47:38 +0000 (08:47 +0300)]
Merge tag 'nfs-for-3.7-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
Pull NFS client bugfixes from Trond Myklebust:
- Do not call pnfs_return_layout() from an rpciod context
- nfs4_ds_disconnect can cause Oopses. Kill it...
- Fix the return value for nfs_callback_start_svc
- Fix a number of compile warnings
* tag 'nfs-for-3.7-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
NFSv4: Fix the return value for nfs_callback_start_svc
NFSv4.1: Declare osd_pri_2_pnfs_err(), objio_init_read/write to be static
NFSv4: fs/nfs/nfs4getroot.c needs to include "internal.h"
NFSv4.1: Use kcalloc() to allocate zeroed arrays instead of kzalloc()
NFSv4.1: Do not call pnfs_return_layout() from an rpciod context
NFSv4.1: Kill nfs4_ds_disconnect()
Thomas Hellstrom [Mon, 22 Oct 2012 12:51:26 +0000 (12:51 +0000)]
drm/ttm: Fix a theoretical race in ttm_bo_cleanup_refs()
In theory, that function could release the lru lock between checking
for a bo on the ddestroy list and a successful reserve, if the bo was
already reserved and the function was called with waiting reserves
allowed.
However, all current reservers of a bo on the ddestroy list would
atomically take the bo off the list after a successful reserve, so this
race should not have been hit; hence no need to backport for stable.
This patch also fixes a case found by Maarten Lankhorst where
ttm_mem_evict_first called with no_wait_gpu would incorrectly
spin waiting for bo idle if trying to evict a busy buffer that
also sits on the ddestroy list.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
Thomas Hellstrom [Mon, 22 Oct 2012 12:51:25 +0000 (12:51 +0000)]
drm/ttm: Fix a theoretical race
The ttm_mem_evict_first function could theoretically drop the
lru lock without retrying if a reservation from off the LRU list
ended up waiting.
However, since there are currently no users that could cause a wait in
that situation, this is not suitable for stable.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>