Peter Zijlstra [Fri, 20 Feb 2015 13:05:38 +0000 (14:05 +0100)]
perf: Add per event clockid support
While thinking about the whole clock discussion it occurred to me that we
have two distinct uses of time:
1) the tracking of event/ctx/cgroup enabled/running/stopped times
which includes the self-monitoring support in struct
perf_event_mmap_page.
2) the actual timestamps visible in the data records.
And we've been conflating them.
The first is all about tracking time deltas; nobody should really care
in what time base that happens, it's all relative information, and as long
as it's internally consistent it works.
The second however is what people are worried about when having to
merge their data with external sources. And here we have the
discussion on MONOTONIC vs MONOTONIC_RAW etc..
Where MONOTONIC is good for correlating between machines (static
offset), MONOTONIC_RAW is required for correlating against a fixed-rate
hardware clock.
This means configurability; now 1) makes that hard because it needs to
be internally consistent across groups of unrelated events, which is
why we had to have a global perf_clock().
However, for 2) it doesn't really matter, perf itself doesn't care
what it writes into the buffer.
The below patch makes the distinction between these two cases by
adding perf_event_clock() which is used for the second case. It
further makes this configurable on a per-event basis, but adds a few
sanity checks such that we cannot combine events with different clocks
in confusing ways.
And since we then have per-event configurability we might as well
retain the 'legacy' behaviour as a default.
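As a rough illustration (not part of the patch text), a user of the new
interface could request MONOTONIC_RAW timestamps for the data records along
these lines; the field names follow the perf_event_attr additions described
above:

    struct perf_event_attr attr = {
        .type          = PERF_TYPE_HARDWARE,
        .config        = PERF_COUNT_HW_CPU_CYCLES,
        .sample_period = 100000,
        .sample_type   = PERF_SAMPLE_IP | PERF_SAMPLE_TIME,
        .use_clockid   = 1,                      /* opt in to a per-event clock */
        .clockid       = CLOCK_MONOTONIC_RAW,    /* clock used for record timestamps */
    };

    int fd = syscall(__NR_perf_event_open, &attr, /*pid=*/0, /*cpu=*/-1,
                     /*group_fd=*/-1, /*flags=*/0);

The default (use_clockid == 0) keeps the legacy perf_clock() behaviour.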
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: John Stultz <john.stultz@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Ingo Molnar [Fri, 27 Mar 2015 09:09:21 +0000 (10:09 +0100)]
Merge branch 'timers/core' into perf/timer, to apply dependent patch
An upcoming patch will depend on tai_ns() and NMI-safe ktime_get_raw_fast(),
so merge timers/core here in a separate topic branch until it's all cooked
and timers/core is merged upstream.
Peter Zijlstra [Thu, 19 Mar 2015 09:09:06 +0000 (10:09 +0100)]
time: Rename timekeeper::tkr to timekeeper::tkr_mono
In preparation of adding another tkr field, rename this one to
tkr_mono. Also rename tk_read_base::base_mono to tk_read_base::base,
since the structure is not specific to CLOCK_MONOTONIC and the mono
name got added to the tk_read_base instance.
Lots of trivial churn.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: John Stultz <john.stultz@linaro.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20150319093400.344679419@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Andi Kleen [Wed, 18 Feb 2015 02:18:06 +0000 (18:18 -0800)]
perf/x86/intel: Add INST_RETIRED.ALL workarounds
On Broadwell INST_RETIRED.ALL cannot be used with any period
that doesn't have the lowest 6 bits cleared. And the period
should not be smaller than 128.
BDM11: When using a period < 100, we may get incorrect PEBS/PMI
interrupts and/or an invalid counter state.
BDM55: When bits 0-5 of the period are non-zero we may get redundant PEBS
records on overflow.
Add a new callback to enforce this, and set it for Broadwell.
How does this handle the case when an app requests a specific
period with some of the bottom bits set?
Short answer:
Any useful instruction sampling period needs to be 4-6 orders
of magnitude larger than 128, as a PMI every 128 instructions
would instantly overwhelm the system and be throttled.
So the +-64 error from this is really small compared to the
period, much smaller than normal system jitter.
Long answer (by Peterz):
IFF we guarantee perf_event_attr::sample_period >= 128.
Suppose we start out with sample_period=192; then we'll set period_left
to 192, we'll end up with left = 128 (we truncate the lower bits). We
get an interrupt, find that period_left = 64 (>0 so we return 0 and
don't get an overflow handler), up that to 128. Then we trigger again,
at n=256. Then we find period_left = -64 (<=0 so we return 1 and do get
an overflow). We increment with sample_period so we get left = 128. We
fire again, at n=384, period_left = 0 (<=0 so we return 1 and get an
overflow). And on and on.
So while the individual interrupts are 'wrong', we get them with
interval=256,128 in exactly the right ratio to average out at 192. And
this works for everything >=128.
So the num_samples*fixed_period thing is still entirely correct +- 127,
which is good enough I'd say, as you already have that error anyhow.
So no need to 'fix' the tools; all we need to do is refuse to create
INST_RETIRED:ALL events with sample_period < 128.
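A sketch of what such a callback could look like (assumption: the hook is
handed the remaining period just before it is programmed into the counter):

    /* INST_RETIRED.ALL is event 0xc0, umask 0x01 */
    static unsigned int bdw_limit_period(struct perf_event *event,
                                         unsigned int left)
    {
        if ((event->hw.config & INTEL_ARCH_EVENT_MASK) ==
            X86_CONFIG(.event = 0xc0, .umask = 0x01)) {
            if (left < 128)
                left = 128;          /* BDM11: never program a period < 128 */
            left &= ~0x3fU;          /* BDM55: clear the low 6 bits */
        }
        return left;
    }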
Andi Kleen [Wed, 18 Feb 2015 02:18:05 +0000 (18:18 -0800)]
perf/x86/intel: Add Broadwell core support
Add Broadwell core support to perf.
The basic support is very similar to Haswell. We use the new cache
event list added for Haswell earlier. The only differences
are a few bits related to remote nodes. To avoid an extra,
mostly identical, table these are patched up in the initialization code.
The constraint list has one new event that needs to be handled compared to Haswell.
Andi Kleen [Wed, 18 Feb 2015 02:18:04 +0000 (18:18 -0800)]
perf/x86/intel: Add new cache events table for Haswell
Haswell offcore events are quite different from Sandy Bridge.
Add a new table to handle Haswell properly.
Note that the offcore bits listed in the SDM are not quite correct
(this is currently being fixed). An up-to-date list of bits is
in the patch.
The basic setup is similar to Sandy Bridge. The prefetch columns
have been removed, as prefetch counting is not very reliable
on Haswell. One L1 event that is no longer in the event list
has also been removed.
- data reads do not include code reads (comparable to earlier Sandy Bridge tables)
- data counts include speculative execution (except L1 write, dtlb, bpu)
- remote node access includes both remote memory, remote cache, remote mmio.
- prefetches are not included in the counts for consistency
(different from Sandy Bridge, which includes prefetches in the remote node)
Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Daniel Thompson <daniel.thompson@linaro.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Stephen Boyd <sboyd@codeaurora.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Daniel Thompson [Thu, 26 Mar 2015 19:23:26 +0000 (12:23 -0700)]
timers, sched/clock: Avoid deadlock during read from NMI
Currently it is possible for an NMI (or FIQ on ARM) to come in
and read sched_clock() whilst update_sched_clock() has locked
the seqcount for writing. This results in the NMI handler
locking up when it calls raw_read_seqcount_begin().
This patch fixes the NMI safety issues by providing banked clock
data. This is a similar approach to the one used in Thomas
Gleixner's 4396e058c52e("timekeeping: Provide fast and NMI safe
access to CLOCK_MONOTONIC").
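A rough sketch of the resulting NMI-safe read side, assuming two copies of
the clock data selected by the low bit of the sequence counter (helper names
illustrative):

    struct clock_read_data *rd;
    unsigned long seq;
    u64 cyc, ns;

    do {
        seq = raw_read_seqcount(&cd.seq);
        rd  = cd.read_data + (seq & 1);   /* odd seq: updater busy, use spare bank */
        cyc = (rd->read_sched_clock() - rd->epoch_cyc) & rd->sched_clock_mask;
        ns  = rd->epoch_ns + cyc_to_ns(cyc, rd->mult, rd->shift);
    } while (read_seqcount_retry(&cd.seq, seq));

An NMI that arrives mid-update therefore reads the bank that is not being
written instead of spinning on the seqcount.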
Suggested-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: John Stultz <john.stultz@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/1427397806-20889-6-git-send-email-john.stultz@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Daniel Thompson [Thu, 26 Mar 2015 19:23:25 +0000 (12:23 -0700)]
timers, sched/clock: Remove redundant notrace from update function
Currently update_sched_clock() is marked as notrace but this
function is not called by ftrace. This is trivially fixed by
removing the annotation.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: John Stultz <john.stultz@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/1427397806-20889-5-git-send-email-john.stultz@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Daniel Thompson [Thu, 26 Mar 2015 19:23:24 +0000 (12:23 -0700)]
timers, sched/clock: Remove suspend from clock_read_data()
Currently cd.read_data.suspended is read by the hotpath function
sched_clock(). This variable need not be accessed on the
hotpath. In fact, once it is removed, we can remove the
conditional branches from sched_clock() and install a dummy
read_sched_clock function to suspend the clock.
The new master copy of the function pointer
(actual_read_sched_clock) is introduced and is used for all
reads of the clock hardware except those within sched_clock
itself.
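Roughly, the suspend path then looks like this sketch (field names are an
assumption based on the description above):

    static u64 notrace suspended_sched_clock_read(void)
    {
        return cd.read_data.epoch_cyc;   /* frozen readout while suspended */
    }

    static int sched_clock_suspend(void)
    {
        update_sched_clock();
        cd.actual_read_sched_clock   = cd.read_data.read_sched_clock;
        cd.read_data.read_sched_clock = suspended_sched_clock_read;
        return 0;
    }

sched_clock() keeps calling cd.read_data.read_sched_clock() unconditionally;
no 'suspended' flag is tested on the hot path.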
Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: John Stultz <john.stultz@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/1427397806-20889-4-git-send-email-john.stultz@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Daniel Thompson [Thu, 26 Mar 2015 19:23:23 +0000 (12:23 -0700)]
timers, sched/clock: Optimize cache line usage
Currently sched_clock(), a very hot code path, is not optimized
to minimise its cache profile. In particular:
1. cd is not ____cacheline_aligned,
2. struct clock_data does not distinguish between hotpath and
coldpath data, reducing locality of reference in the hotpath,
3. Some hotpath data is missing from struct clock_data and is marked
__read_mostly (which more or less guarantees it will not share a
cache line with cd).
This patch corrects these problems by extracting all hotpath
data into a separate structure and using ____cacheline_aligned
to ensure the hotpath uses a single (64 byte) cache line.
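As an illustrative sketch of the layout (field names approximate), the
hotpath fields are gathered into one structure and the container is
cache-line aligned:

    struct clock_read_data {             /* everything sched_clock() touches */
        u64 epoch_ns;
        u64 epoch_cyc;
        u64 sched_clock_mask;
        u64 (*read_sched_clock)(void);
        u32 mult;
        u32 shift;
    };

    struct clock_data {
        seqcount_t              seq;
        struct clock_read_data  read_data;   /* hot */
        ktime_t                 wrap_kt;     /* cold: update/registration only */
        unsigned long           rate;
    } ____cacheline_aligned;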
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: John Stultz <john.stultz@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/1427397806-20889-3-git-send-email-john.stultz@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Daniel Thompson [Thu, 26 Mar 2015 19:23:22 +0000 (12:23 -0700)]
timers, sched/clock: Match scope of read and write seqcounts
Currently the scope of the raw_write_seqcount_begin/end() in
sched_clock_register() far exceeds the scope of the read section
in sched_clock(). This gives the impression of safety during
cursory review but achieves little.
Note that this is likely to be a latent issue at present because
sched_clock_register() is typically called before we enable
interrupts; however, the issue does risk bugs being needlessly
introduced as the code evolves.
This patch fixes the problem by increasing the scope of the read
locking performed by sched_clock() to cover all data modified by
sched_clock_register().
We also improve clarity by moving writes to struct clock_data
that do not impact sched_clock() outside of the critical
section.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
[ Reworked it slightly to apply to tip/timers/core] Signed-off-by: John Stultz <john.stultz@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/1427397806-20889-2-git-send-email-john.stultz@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Linus Torvalds [Thu, 26 Mar 2015 22:04:05 +0000 (15:04 -0700)]
Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
Pull drm refcounting fixes from Dave Airlie:
"Here is the complete set of i915 bug/warn/refcounting fixes"
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
drm/i915: Fixup legacy plane->crtc link for initial fb config
drm/i915: Fix atomic state when reusing the firmware fb
drm/i915: Keep ring->active_list and ring->requests_list consistent
drm/i915: Don't try to reference the fb in get_initial_plane_config()
drm: Fixup racy refcounting in plane_force_disable
Linus Torvalds [Thu, 26 Mar 2015 21:53:47 +0000 (14:53 -0700)]
Merge tag 'dm-4.0-fix-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper fix from Mike Snitzer:
"Fix DM core device cleanup regression -- due to a latent race that was
exposed by the bdi changes that were introduced during the 4.0 merge"
* tag 'dm-4.0-fix-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm: fix add_disk() NULL pointer due to race with free_dev()
Linus Torvalds [Thu, 26 Mar 2015 21:43:42 +0000 (14:43 -0700)]
Merge tag 'linux-kselftest-4.0-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
Pull kselftest fix from Shuah Khan.
* tag 'linux-kselftest-4.0-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
selftests: Fix build failures when invoked from kselftest target
Dave Airlie [Thu, 26 Mar 2015 21:39:45 +0000 (07:39 +1000)]
Merge tag 'drm-intel-fixes-2015-03-26' of git://anongit.freedesktop.org/drm-intel into drm-fixes
This should cover the final warnings in -rc5 with two more backports
from our development branch (drm-intel-next-queued). They're the ones
from Daniel and Damien, with references to the reports.
This is on top of drm-fixes because of the dependency on the two earlier
fixes not yet in Linus' tree.
There's an additional regression fix from Chris.
* tag 'drm-intel-fixes-2015-03-26' of git://anongit.freedesktop.org/drm-intel:
drm/i915: Fixup legacy plane->crtc link for initial fb config
drm/i915: Fix atomic state when reusing the firmware fb
drm/i915: Keep ring->active_list and ring->requests_list consistent
Linus Torvalds [Thu, 26 Mar 2015 21:11:17 +0000 (14:11 -0700)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 fixes from Martin Schwidefsky:
"A couple of bug fixes for s390.
The ftrace compile fix is quite large for a -rc6 release, but it would
be nice to have it in 4.0"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/smp: reenable smt after resume
s390/mm: limit STACK_RND_MASK for compat tasks
s390/ftrace: fix compile error if CONFIG_KPROBES is disabled
s390/cpum_sf: add diagnostic sampling event only if it is authorized
Steven Rostedt [Tue, 24 Mar 2015 18:58:13 +0000 (14:58 -0400)]
tools lib traceevent: Zero should not be considered "not found" in eval_flag()
Guilherme Cox found that:
There is, however, a potential bug if there is an item with code zero
that is not the first one in the symbol list, since eval_flag(..)
returns 0 when it doesn't find anything.
If none of the enums are known to pevent, then eval_flag() will return
zero, and it will match it to the first item in the list, which would be
FOO_GO, which is not zero.
Luckily, in most cases, the first element would be zero, and the parsing
would match out of sheer luck.
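A minimal sketch of the idea behind the fix, returning "found" separately so
that a flag value of zero remains valid (names illustrative):

    static int eval_flag(const char *flag, const struct flag *flags,
                         int nr_flags, unsigned long long *val)
    {
        int i;

        for (i = 0; i < nr_flags; i++) {
            if (strcmp(flag, flags[i].name) == 0) {
                *val = flags[i].value;   /* may legitimately be 0 */
                return 1;
            }
        }
        return 0;                        /* genuinely not found */
    }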
Reported-by: Guilherme Cox <cox@computer.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/20150324145813.0bfe95ba@gandalf.local.home Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf trace: Handle multiple threads better wrt syscalls being intermixed
A bug was introduced where the number of bytes output directly to the
output file was counted when formatting the syscall entry buffer that is
stored to be finally printed at syscall exit, ending up leaving garbage
at the start of syscalls that appeared while another syscall was being
processed in another thread. Fix it.
Example of garbage in the output before this patch:
David Ahern [Tue, 24 Mar 2015 16:10:55 +0000 (12:10 -0400)]
perf tools: Set JOBS based on CPU or processor
Number of JOBS to use is set automatically to the number of processors found
in /proc/cpuinfo. SPARC uses 'CPU' lines rather than 'processor'. Update the
check in perf's Makefile to work for SPARC.
perf evlist: Return the first evsel with an invalid filter in apply_filters()
Use of a bad filter currently generates the message:
Error: failed to set filter with 22 (Invalid argument)
Add the event name to make it clear to which event the filter
failed to apply:
Error: Failed to set filter "foo" on event sched:sg_lb_stats: 22: Invalid argument
To test it use something like:
# perf record -e sched:sched_switch -e sched:*fork --filter parent_pid==1 -e sched:*wait* --filter bla usleep 1
Error: failed to set filter "bla" on event sched:sched_stat_iowait with 22 (Invalid argument)
#
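A sketch of the shape of the change, assuming apply_filters() gains an
out-parameter so the caller can name the offending event and filter (helper
names are illustrative):

    int perf_evlist__apply_filters(struct perf_evlist *evlist,
                                   struct perf_evsel **err_evsel)
    {
        struct perf_evsel *evsel;
        int err = 0;

        evlist__for_each(evlist, evsel) {
            if (evsel->filter == NULL)
                continue;

            err = perf_evsel__apply_filter(evsel, evsel->filter);
            if (err) {
                *err_evsel = evsel;   /* caller prints evsel->name and the filter */
                break;
            }
        }
        return err;
    }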
Based-on-a-patch-by: David Ahern <dsahern@gmail.com> Acked-by: David Ahern <dsahern@gmail.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: Don Zickus <dzickus@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-d7gq2fjvaecozp9o2i0siifu@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
David Ahern [Tue, 24 Mar 2015 20:14:09 +0000 (16:14 -0400)]
perf timechart: Fix SIGBUS error on sparc64
perf timechart -T on sparc64 is terminating due to SIGBUS. Backtrace:
Program received signal SIGBUS, Bus error.
0x0000000000173d7c in perf_evsel__intval (evsel=<value optimized out>, sample=0x7feffffda28, name=0x289b28 "prev_state")
at util/evsel.c:1918
1918 util/evsel.c: No such file or directory.
in util/evsel.c
Missing separate debuginfos, use: debuginfo-install audit-libs-2.3.7-1.0.1.el6.sparc64 bzip2-libs-1.0.5-7.el6_0.sparc64 elfutils-libelf-0.155-2.0.3.el6.sparc64 elfutils-libs-0.155-2.0.3.el6.sparc64 glibc-2.12-1.132.0.8.el6_5.sparc64 numactl-2.0.7-8.el6.sparc64 python-libs-2.6.6-52.0.2.el6.sparc64 slang-2.2.1-1.el6.sparc64 xz-libs-4.999.9-0.3.beta.20091007git.el6.sparc64 zlib-1.2.3-29.el6.sparc64
(gdb) bt
0 0x0000000000173d7c in perf_evsel__intval (evsel=<value optimized out>, sample=0x7feffffda28,
name=0x289b28 "prev_state") at util/evsel.c:1918
1 0x0000000000123b94 in process_sample_sched_switch (tchart=0x7feffffe040, evsel=0x4ca850, sample=0x7feffffda28,
backtrace=0xc39010 "") at builtin-timechart.c:627
2 0x0000000000122828 in process_sample_event (tool=0x7feffffe040, event=<value optimized out>, sample=0x7feffffda28,
evsel=0x4ca850, machine=0x4c9c88) at builtin-timechart.c:569
Another extended load on an unaligned pointer. As before, fix it by
copying to a temporary variable using memcpy.
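The fix pattern, sketched:

    u64 val;

    /* an unaligned direct load such as: val = *(u64 *)ptr; can SIGBUS on sparc64 */
    memcpy(&val, ptr, sizeof(val));   /* byte-wise copy is alignment-safe */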
drm/i915: Fix modeset state confusion in the load detect code
But this time around it was the initial fb code that forgot to update
the plane->crtc pointer. Otherwise it's the exact same bug, with the
exact same restrains (any set_config call/ioctl that doesn't disable
the pipe papers over the bug for free, so fairly hard to hit in normal
testing). So if you want the full explanation just go read that one
over there - it's rather long ...
Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Josh Boyer <jwboyer@fedoraproject.org> Cc: Jani Nikula <jani.nikula@linux.intel.com> Reported-and-tested-by: Josh Boyer <jwboyer@fedoraproject.org> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[Jani: backported to drm-intel-fixes for v4.0-rc]
Reference: http://mid.gmane.org/CA+5PVA7ChbtJrknqws1qvZcbrg1CW2pQAFkSMURWWgyASRyGXg@mail.gmail.com Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Chris Wilson [Wed, 18 Mar 2015 18:19:22 +0000 (18:19 +0000)]
drm/i915: Keep ring->active_list and ring->requests_list consistent
If we retire requests last, we may use a later seqno and so clear
the requests lists without clearing the active list, leading to
confusion. Hence we should retire requests first for consistency with
the early return. The order used to be important as the lifecycle for
the object on the active list was determined by request->seqno. However,
the requests themselves are now reference counted removing the
constraint from the order of retirement.
drm/i915: Convert 'i915_seqno_passed' calls into 'i915_gem_request_completed()'
and a
WARNING: CPU: 0 PID: 1383 at drivers/gpu/drm/i915/i915_gem_evict.c:279 i915_gem_evict_vm+0x10c/0x140()
WARN_ON(!list_empty(&vm->active_list))
Identified by updating WATCH_LISTS:
[drm:i915_verify_lists] *ERROR* blitter ring: active list not empty, but no requests
WARNING: CPU: 0 PID: 681 at drivers/gpu/drm/i915/i915_gem.c:2751 i915_gem_retire_requests_ring+0x149/0x230()
WARN_ON(i915_verify_lists(ring->dev))
Note that this is only a problem in evict_vm where the following happens
after a retire_request has cleaned out all requests, but not all active
bo:
- intel_ring_idle called from i915_gpu_idle notices that no requests are
outstanding and immediately returns.
- i915_gem_retire_requests_ring called from i915_gem_retire_requests also
immediately returns when there's no request, still leaving the bo on the
active list.
- evict_vm hits the WARN_ON(!list_empty(&vm->active_list)) after evicting
all active objects that there's still stuff left that shouldn't be
there.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Linus Torvalds [Wed, 25 Mar 2015 23:21:17 +0000 (16:21 -0700)]
Merge branch 'akpm' (patches from Andrew)
Merge misc fixes from Andrew Morton:
"15 fixes"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
mm: numa: mark huge PTEs young when clearing NUMA hinting faults
mm: numa: slow PTE scan rate if migration failures occur
mm: numa: preserve PTE write permissions across a NUMA hinting fault
mm: numa: group related processes based on VMA flags instead of page table flags
hfsplus: fix B-tree corruption after insertion at position 0
MAINTAINERS: add Jan as DMI/SMBIOS support maintainer
fs/affs/file.c: unlock/release page on error
mm/page_alloc.c: call kernel_map_pages in unset_migratetype_isolate
mm/slub: fix lockups on PREEMPT && !SMP kernels
mm/memory hotplug: postpone the reset of obsolete pgdat
MAINTAINERS: correct rtc armada38x pattern entry
mm/pagewalk.c: prevent positive return value of walk_page_test() from being passed to callers
mm: fix anon_vma->degree underflow in anon_vma endless growing prevention
drivers/rtc/rtc-mrst: fix suspend/resume
aoe: update aoe maintainer information
Mel Gorman [Wed, 25 Mar 2015 22:55:45 +0000 (15:55 -0700)]
mm: numa: mark huge PTEs young when clearing NUMA hinting faults
Base PTEs are marked young when the NUMA hinting information is cleared
but the same does not happen for huge pages which this patch addresses.
Note that migrated pages are not marked young as the base page migration
code does not assume that migrated pages have been referenced. This
could be addressed, but that is beyond the scope of this series, which is
aimed at Dave Chinner's shrink workload that is unlikely to be affected by
this issue.
Mel Gorman [Wed, 25 Mar 2015 22:55:42 +0000 (15:55 -0700)]
mm: numa: slow PTE scan rate if migration failures occur
Dave Chinner reported the following on https://lkml.org/lkml/2015/3/1/226
Across the board the 4.0-rc1 numbers are much slower, and the degradation
is far worse when using the large memory footprint configs. Perf points
straight at the cause - this is from 4.0-rc1 on the "-o bhash=101073" config:
This is showing excessive migration activity even though excessive
migrations are meant to get throttled. Normally, the scan rate is tuned
on a per-task basis depending on the locality of faults. However, if
migrations fail for any reason then the PTE scanner may scan faster if
the faults continue to be remote. This means there is higher system CPU
overhead and fault trapping at exactly the time we know that migrations
cannot happen. This patch tracks when migration failures occur and
slows the PTE scanner.
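Conceptually the change amounts to a back-off of this shape (field and
helper names are illustrative, not the literal patch):

    /* on a NUMA hinting fault whose migration failed: */
    if (migration_failed)
        p->numa_scan_period = min(p->numa_scan_period_max,
                                  p->numa_scan_period << 1);   /* scan less often */

so that the scanner slows down precisely when migrations are known not to
succeed.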
Signed-off-by: Mel Gorman <mgorman@suse.de> Reported-by: Dave Chinner <david@fromorbit.com> Tested-by: Dave Chinner <david@fromorbit.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mel Gorman [Wed, 25 Mar 2015 22:55:40 +0000 (15:55 -0700)]
mm: numa: preserve PTE write permissions across a NUMA hinting fault
Protecting a PTE to trap a NUMA hinting fault clears the writable bit
and further faults are needed after trapping a NUMA hinting fault to set
the writable bit again. This patch preserves the writable bit when
trapping NUMA hinting faults. The impact is obvious from the number of
minor faults trapped during the basic balancing benchmark and the system
CPU usage:
autonumabench
4.0.0-rc4 4.0.0-rc4
baseline preserve
Time System-NUMA01 107.13 ( 0.00%) 103.13 ( 3.73%)
Time System-NUMA01_THEADLOCAL 131.87 ( 0.00%) 83.30 ( 36.83%)
Time System-NUMA02 8.95 ( 0.00%) 10.72 (-19.78%)
Time System-NUMA02_SMT 4.57 ( 0.00%) 3.99 ( 12.69%)
Time Elapsed-NUMA01 515.78 ( 0.00%) 517.26 ( -0.29%)
Time Elapsed-NUMA01_THEADLOCAL 384.10 ( 0.00%) 384.31 ( -0.05%)
Time Elapsed-NUMA02 48.86 ( 0.00%) 48.78 ( 0.16%)
Time Elapsed-NUMA02_SMT 47.98 ( 0.00%) 48.12 ( -0.29%)
4.0.0-rc4 4.0.0-rc4
baseline preserve
User 44383.95 43971.89
System 252.61 201.24
Elapsed 998.68 1000.94
The patch looks hacky but the alternatives looked worse. The tidiest
approach was to rewalk the page tables after a hinting fault, but it was
more complex than this approach and the performance was worse. It's not generally
safe to just mark the page writable during the fault if it's a write
fault as it may have been read-only for COW so that approach was
discarded.
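A sketch of the fault-side handling implied by the description, restoring
the writability that existed before the protection update (variable names
illustrative):

    bool was_writable = pte_write(pte);   /* write bit kept by the prot-NUMA update */

    /* restore the PTE after servicing the hinting fault: */
    pte = pte_modify(pte, vma->vm_page_prot);
    pte = pte_mkyoung(pte);
    if (was_writable)
        pte = pte_mkwrite(pte);           /* no extra write fault needed later */
    set_pte_at(mm, addr, ptep, pte);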
Signed-off-by: Mel Gorman <mgorman@suse.de> Reported-by: Dave Chinner <david@fromorbit.com> Tested-by: Dave Chinner <david@fromorbit.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mel Gorman [Wed, 25 Mar 2015 22:55:37 +0000 (15:55 -0700)]
mm: numa: group related processes based on VMA flags instead of page table flags
These are three follow-on patches based on the xfsrepair workload Dave
Chinner reported was problematic in 4.0-rc1 due to changes in page table
management -- https://lkml.org/lkml/2015/3/1/226.
Much of the problem was reduced by commit 53da3bc2ba9e ("mm: fix up numa
read-only thread grouping logic") and commit ba68bc0115eb ("mm: thp:
Return the correct value for change_huge_pmd"). It was known that the
performance in 3.19 was still better even if is far less safe. This
series aims to restore the performance without compromising on safety.
For the test of this mail, I'm comparing 3.19 against 4.0-rc4 and the
three patches applied on top
Overall the system CPU usage is comparable and the test is naturally a
bit variable. The slowing of the scanner hurts numa01 but on this
machine it is an adverse workload and patches that dramatically help it
often hurt absolutely everything else.
Due to patch 2, the fault activity is interesting
3.19.0 4.0.0-rc4 4.0.0-rc4 4.0.0-rc4 4.0.0-rc4
vanilla        vanilla   vmwrite-v5r8  preserve-v5r8  slowscan-v5r8
Minor Faults 20978112656646259724919812301636841
Major Faults 362 450 365 364 365
Note the impact preserving the write bit across protection updates and
fault reduces faults.
Here the impact of slowing the PTE scanner on migration failures is
obvious as "NUMA base PTE updates" and "NUMA huge PMD updates" are
massively reduced even though the headline performance is very similar.
As xfsrepair was the reported workload here is the impact of the series
on it.
The really relevant line is syst-xfsrepair, which is the system CPU
usage when running xfsrepair. Note that on my machine the overhead was
45% higher on 4.0-rc4 which may be part of what Dave is seeing. Once we
preserve the write bit across faults, it's only 2.51% higher on average.
With the full series applied, system CPU usage is 24.6% lower on
average.
Again, the impact of preserving the write bit on minor faults is obvious
and the impact of slowing scanning after migration failures is obvious
on the PTE updates. Note also that the number of pages migrated is much
reduced even though the headline performance is comparable.
3.19.0 4.0.0-rc4 4.0.0-rc4 4.0.0-rc4 4.0.0-rc4
vanilla        vanilla   vmwrite-v5r8  preserve-v5r8  slowscan-v5r8
Mean sdb-avgqusz 13.47 2.54 2.55 2.47 2.49
Mean sdb-avgrqsz 202.32 140.22 139.50 139.02 138.12
Mean sdb-await 25.92 5.09 5.33 5.02 5.22
Mean sdb-r_await 4.71 0.19 0.83 0.51 0.11
Mean sdb-w_await 104.13 5.21 5.38 5.05 5.32
Mean sdb-svctm 0.59 0.13 0.14 0.13 0.14
Mean sdb-rrqm 0.16 0.00 0.00 0.00 0.00
Mean sdb-wrqm 3.59 1799.43 1826.84 1812.21 1785.67
Max sdb-avgqusz 111.06 12.13 14.05 11.66 15.60
Max sdb-avgrqsz 255.60 190.34 190.01 187.33 191.78
Max sdb-await 168.24 39.28 49.22 44.64 65.62
Max sdb-r_await 660.00 52.00 280.00 76.00 12.00
Max sdb-w_await 7804.00 39.28 49.22 44.64 65.62
Max sdb-svctm 4.00 2.82 2.86 1.98 2.84
Max sdb-rrqm 8.30 0.00 0.00 0.00 0.00
Max sdb-wrqm 34.20 5372.80 5278.60 5386.60 5546.15
FWIW, I also checked SPECjbb in different configurations but it's
similar observations -- minor faults lower, PTE update activity lower
and performance is roughly comparable against 3.19.
This patch (of 3):
Threads that share writable data within pages are grouped together as
related tasks. This decision is based on whether the PTE is marked
dirty which is subject to timing races between the PTE scanner update
and when the application writes the page. If the page is file-backed,
then background flushes and sync also affect placement. This is
unpredictable behaviour which is impossible to reason about, so this
patch makes grouping decisions based on the VMA flags.
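The decision then reduces to something like this sketch in the
hinting-fault path:

    /*
     * Decide grouping from the VMA, not from racy pte_dirty() state:
     * read-only shared mappings should not pull tasks into one group.
     */
    if (!(vma->vm_flags & VM_WRITE))
        flags |= TNF_NO_GROUP;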
Signed-off-by: Mel Gorman <mgorman@suse.de> Reported-by: Dave Chinner <david@fromorbit.com> Tested-by: Dave Chinner <david@fromorbit.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergei Antonov [Wed, 25 Mar 2015 22:55:34 +0000 (15:55 -0700)]
hfsplus: fix B-tree corruption after insertion at position 0
Fix B-tree corruption when a new record is inserted at position 0 in the
node in hfs_brec_insert(). In this case a hfs_brec_update_parent() is
called to update the parent index node (if exists) and it is passed
hfs_find_data with a search_key containing a newly inserted key instead
of the key to be updated. This results in an inconsistent index node.
The bug reproduces on my machine after an extents overflow record for
the catalog file (CNID=4) is inserted into the extents overflow B-tree.
Because of a low (reserved) value of CNID=4, it has to become the first
record in the first leaf node.
A change in hfs_brec_insert() makes hfs_brec_update_parent() work
correctly by preventing it from getting fd->record=-1 value from
__hfs_brec_find().
Along the way, I removed duplicate code with unification of the if
condition. The resulting code is equivalent to the original code
because node is never 0.
Also hfs_brec_update_parent() will now return an error after getting a
negative fd->record value. However, the return value of
hfs_brec_update_parent() is not checked anywhere in the file and I'm
leaving it unchanged by this patch. brec.c lacks error checking after
some other calls too, but this issue is of less importance than the one
being fixed by this patch.
Signed-off-by: Sergei Antonov <saproj@gmail.com> Cc: Joe Perches <joe@perches.com> Reviewed-by: Vyacheslav Dubeyko <slava@dubeyko.com> Acked-by: Hin-Tak Leung <htl10@users.sourceforge.net> Cc: Anton Altaparmakov <aia21@cam.ac.uk> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christoph Hellwig <hch@infradead.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Taesoo Kim [Wed, 25 Mar 2015 22:55:29 +0000 (15:55 -0700)]
fs/affs/file.c: unlock/release page on error
When affs_bread_ino() fails, correctly unlock the page and release the
page cache with the proper error value. All write_end() implementations
should unlock/release the page that was locked by write_begin().
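The error path therefore needs the shape sketched below (labels and the
surrounding code are illustrative):

    bh = affs_bread_ino(inode, bidx, 0);
    if (IS_ERR(bh)) {
        written = PTR_ERR(bh);           /* propagate the real error */
        goto out_unlock;
    }
    ...
    out_unlock:
        unlock_page(page);
        page_cache_release(page);
        return written;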
Signed-off-by: Taesoo Kim <tsgatesv@gmail.com> Cc: Fabian Frederick <fabf@skynet.be> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Jan Kara <jack@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Laura Abbott [Wed, 25 Mar 2015 22:55:26 +0000 (15:55 -0700)]
mm/page_alloc.c: call kernel_map_pages in unset_migratetype_isolate
Commit 3c605096d315 ("mm/page_alloc: restrict max order of merging on
isolated pageblock") changed the logic of unset_migratetype_isolate to
check the buddy allocator and explicitly call __free_pages to merge.
The page that is being freed in this path never had prep_new_page called
so set_page_refcounted is called explicitly but there is no call to
kernel_map_pages. With the default kernel_map_pages this is mostly
harmless but if kernel_map_pages does any manipulation of the page
tables (unmapping or setting pages to read only) this may trigger a
fault:
Mark Rutland [Wed, 25 Mar 2015 22:55:23 +0000 (15:55 -0700)]
mm/slub: fix lockups on PREEMPT && !SMP kernels
Commit 9aabf810a67c ("mm/slub: optimize alloc/free fastpath by removing
preemption on/off") introduced an occasional hang for kernels built with
CONFIG_PREEMPT && !CONFIG_SMP.
The problem is the following loop the patch introduced to
slab_alloc_node and slab_free:
do {
tid = this_cpu_read(s->cpu_slab->tid);
c = raw_cpu_ptr(s->cpu_slab);
} while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));
GCC 4.9 has been observed to hoist the load of c and c->tid above the
loop for !SMP kernels (as in this case raw_cpu_ptr(x) is compile-time
constant and does not force a reload). On arm64 the generated assembly
looks like:
If the thread is preempted between the load of c->tid (into x1) and tid
(into x4), and an allocation or free occurs in another thread (bumping
the cpu_slab's tid), the thread will be stuck in the loop until
s->cpu_slab->tid wraps, which may be forever in the absence of
allocations/frees on the same CPU.
This patch changes the loop condition to access c->tid with READ_ONCE.
This ensures that the value is reloaded even when the compiler would
otherwise assume it could cache the value, and also ensures that the
load will not be torn.
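The fixed loop condition then reads roughly:

    do {
        tid = this_cpu_read(s->cpu_slab->tid);
        c = raw_cpu_ptr(s->cpu_slab);
    } while (IS_ENABLED(CONFIG_PREEMPT) &&
             unlikely(tid != READ_ONCE(c->tid)));   /* force a fresh, untorn load */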
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Cc: Jesper Dangaard Brouer <brouer@redhat.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: Steve Capper <steve.capper@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The cause is the "memset(pgdat, 0, sizeof(*pgdat))" at the end of
try_offline_node(), which resets all the content of pgdat to 0. As the
pgdat is accessed lock-free, users still using the pgdat, such as the
vmstat_update routine, will panic.
process A: offline node XX:
vmstat_updat()
refresh_cpu_vm_stats()
for_each_populated_zone()
find online node XX
cond_resched()
offline cpu and memory, then try_offline_node()
node_set_offline(nid), and memset(pgdat, 0, sizeof(*pgdat))
zone = next_zone(zone)
pg_data_t *pgdat = zone->zone_pgdat; // here pgdat is NULL now
next_online_pgdat(pgdat)
next_online_node(pgdat->node_id); // NULL pointer access
So the solution here is to postpone the reset of the obsolete pgdat from
try_offline_node() to hotadd_new_pgdat(), and to reset only
pgdat->nr_zones and pgdat->classzone_idx to 0, rather than doing a full
memset, to avoid breaking the pointer information in pgdat.
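In sketch form, hotadd_new_pgdat() reuses the stale node data and clears
only what must be zero:

    /* instead of memset(pgdat, 0, sizeof(*pgdat)) in try_offline_node(): */
    pgdat = NODE_DATA(nid);
    if (pgdat) {
        pgdat->nr_zones = 0;         /* will be rebuilt on onlining */
        pgdat->classzone_idx = 0;
        /* pointers used by lock-free readers stay intact */
    }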
Naoya Horiguchi [Wed, 25 Mar 2015 22:55:14 +0000 (15:55 -0700)]
mm/pagewalk.c: prevent positive return value of walk_page_test() from being passed to callers
walk_page_test() is purely pagewalk's internal stuff, and its positive
return values are not intended to be passed to the callers of pagewalk.
However, in the current code if the last vma in the do-while loop in
walk_page_range() happens to return a positive value, it leaks outside
walk_page_range(). So the user visible effect is invalid/unexpected
return value (according to the reporter, mbind() causes it.)
This patch fixes it simply by reinitializing the return value after it
is checked.
Another exposed interface, walk_page_vma(), already returns 0 for such
cases so no problem.
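The fix is essentially the following guard in the walk_page_range() loop,
sketched:

    err = walk_page_test(start, next, walk);
    if (err > 0) {
        err = 0;    /* "skip this vma" is internal, not an error for callers */
        continue;
    }
    if (err < 0)
        break;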
Leon Yu [Wed, 25 Mar 2015 22:55:11 +0000 (15:55 -0700)]
mm: fix anon_vma->degree underflow in anon_vma endless growing prevention
I have constantly stumbled upon "kernel BUG at mm/rmap.c:399!" after
upgrading to 3.19 and had no luck with 4.0-rc1 neither.
So, after looking into new logic introduced by commit 7a3ef208e662 ("mm:
prevent endless growth of anon_vma hierarchy"), I found chances are that
unlink_anon_vmas() is called without incrementing dst->anon_vma->degree
in anon_vma_clone() due to allocation failure. If dst->anon_vma is not
NULL in error path, its degree will be incorrectly decremented in
unlink_anon_vmas() and eventually underflow when exiting as a result of
another call to unlink_anon_vmas(). That's how "kernel BUG at
mm/rmap.c:399!" is triggered for me.
This patch fixes the underflow by dropping dst->anon_vma when allocation
fails. It's safe to do so regardless of original value of dst->anon_vma
because dst->anon_vma doesn't have valid meaning if anon_vma_clone()
fails. Besides, callers don't care about dst->anon_vma in that case either.
Also suggested by Michal Hocko, we can clean up vma_adjust() a bit as
anon_vma_clone() now does the work.
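The error path in anon_vma_clone() then looks roughly like:

    enomem_failure:
        /*
         * dst->anon_vma was never accounted in ->degree here, so clear it
         * before unlinking to avoid the underflow described above.
         */
        dst->anon_vma = NULL;
        unlink_anon_vmas(dst);
        return -ENOMEM;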
[akpm@linux-foundation.org: tweak comment] Fixes: 7a3ef208e662 ("mm: prevent endless growth of anon_vma hierarchy") Signed-off-by: Leon Yu <chianglungyu@gmail.com> Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com> Reviewed-by: Michal Hocko <mhocko@suse.cz> Acked-by: Rik van Riel <riel@redhat.com> Acked-by: David Rientjes <rientjes@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The Moorestown RTC driver implements suspend and resume callbacks and
assigns them to the suspend and resume fields of the device_driver
struct. These callbacks are never actually called by anything though.
Modify the driver to properly use dev_pm_ops so that the suspend and
resume functions are actually executed upon suspend/resume.
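Schematically (function and driver names illustrative), the conversion looks
like:

    static SIMPLE_DEV_PM_OPS(mrst_rtc_pm_ops, mrst_suspend, mrst_resume);

    static struct platform_driver vrtc_mrst_platform_driver = {
        .probe    = vrtc_mrst_platform_probe,
        .remove   = vrtc_mrst_platform_remove,
        .shutdown = vrtc_mrst_platform_shutdown,
        .driver   = {
            .name = "rtc_mrst",
            .pm   = &mrst_rtc_pm_ops,   /* instead of .suspend/.resume on device_driver */
        },
    };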
Ed Cashin [Wed, 25 Mar 2015 22:55:06 +0000 (15:55 -0700)]
aoe: update aoe maintainer information
The coraid.com email address is defunct. The old aoe support area hosted
at coraid.com is no longer up. These changes update the email and website
to current ones.
Signed-off-by: Ed Cashin <ed.cashin@acm.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Wed, 25 Mar 2015 22:40:21 +0000 (15:40 -0700)]
Merge branch 'for-linus' of git://git.kernel.dk/linux-block
Pull block layer fixes from Jens Axboe:
"A small collection of fixes that has been gathered over the last few
weeks. This contains:
- A one-liner fix for NVMe, fixing a missing list_head init that
could makes us oops on hitting recovery at load time.
- Two small blk-mq fixes:
- Fixup a bad goto jump on error handling.
- Fix for oopsing if running out of reserved tags.
- A memory leak fix for NBD.
- Two small writeback fixes from Tejun, fixing a missing init to
INITIAL_JIFFIES, and a possible underflow introduced recently.
- A core merge fixup in sg gap detection, where rq->biotail was
indexed with the count of rq->bio"
* 'for-linus' of git://git.kernel.dk/linux-block:
writeback: fix possible underflow in write bandwidth calculation
NVMe: Initialize device list head before starting
Fix bug in blk_rq_merge_ok
blkmq: Fix NULL pointer deref when all reserved tags in
blk-mq: fix use of incorrect goto label in blk_mq_init_queue error path
nbd: fix possible memory leak
writeback: add missing INITIAL_JIFFIES init in global_update_bandwidth()
Heiko Carstens [Sat, 21 Mar 2015 11:43:08 +0000 (12:43 +0100)]
s390/smp: reenable smt after resume
After a suspend/resume cycle we forgot to re-enable SMT, which leads
to all sorts of bugs, since the kernel assumes SMT is enabled while the
hardware thinks it is not.
Reported-and-tested-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reported-by: Stefan Haberland <stefan.haberland@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Linus Torvalds [Wed, 25 Mar 2015 00:27:18 +0000 (17:27 -0700)]
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull two arm64 fixes from Catalin Marinas:
- switch_mm() fix where init_mm.pgd ends up in the user TTBR0;
swapper_pg_dir is not suitable for user mappings
- this_cpu accessors fix for preemption safety
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: percpu: Make this_cpu accessors pre-empt safe
arm64: Use the reserved TTBR0 if context switching to the init_mm
Linus Torvalds [Wed, 25 Mar 2015 00:23:03 +0000 (17:23 -0700)]
Merge tag 'powerpc-4.0-3' of git://git.kernel.org/pub/scm/linux/kernel/git/mpe/linux
Pull powerpc fixes from Michael Ellerman:
- Fix the MCE code to use CONFIG_KVM_BOOK3S_64_HANDLER
- Little endian fixes for post mobility device tree update
- Add PVR for POWER8NVL processor
- Fixes for hypervisor doorbell handling
* tag 'powerpc-4.0-3' of git://git.kernel.org/pub/scm/linux/kernel/git/mpe/linux:
powerpc/book3s: Fix the MCE code to use CONFIG_KVM_BOOK3S_64_HANDLER
powerpc/pseries: Little endian fixes for post mobility device tree update
powerpc: Add PVR for POWER8NVL processor
powerpc/powernv: Fixes for hypervisor doorbell handling
Linus Torvalds [Wed, 25 Mar 2015 00:08:29 +0000 (17:08 -0700)]
Merge branch 'for-4.0-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata
Pull libata fix from Tejun Heo:
"One patch to fix a regression from the recent switch to blk-mq tag
allocation which can cause oops on SAS-attached SATA drives"
* 'for-4.0-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata:
ata: Add a new flag to destinguish sas controller
Linus Torvalds [Wed, 25 Mar 2015 00:02:45 +0000 (17:02 -0700)]
Merge tag 'mfd-fixes-4.0' of git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd
Pull MFD fixes from Lee Jones:
- Use DMA'able addresses for DMA; rtsx_usb
- Use return value in the correct way; kempld-core
* tag 'mfd-fixes-4.0' of git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd:
mfd: kempld-core: Fix callback return value check
mfd: rtsx_usb: Prevent DMA from stack
drm/i915: Ensure plane->state->fb stays in sync with plane->fb
v2: Don't move update_state_fb(). It was moved around because I
originally put update_state_fb() in intel_alloc_plane_obj() before
finding a better place. (Matt)
Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
From drm-next:
(cherry picked from commit f55548b5af87ebfc586ca75748947f1c1b1a4a52) Signed-off-by: Dave Airlie <airlied@redhat.com>
Linus Torvalds [Tue, 24 Mar 2015 23:58:29 +0000 (16:58 -0700)]
Merge tag 'spi-v4.0-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Pull spi fixes from Mark Brown:
"A couple of driver specific fixes of the usual "important if you have
that device" kind together with a fix for a use after free bug that
was introduced into the trace code in some of the recent refactoring
of the message queue handling"
* tag 'spi-v4.0-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi:
spi: trigger trace event for message-done before mesg->complete
spi: dw-mid: clear BUSY flag fist and test other one
spi: qup: Fix cs-num DT property parsing
Linus Torvalds [Tue, 24 Mar 2015 23:51:42 +0000 (16:51 -0700)]
Merge tag 'regulator-fix-v4.0-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator
Pull regulator fixes from Mark Brown:
"Two fixes here, one typo fix in the documentation and one fix for a
system hang with one of the Palmas chips caused by the use of an
incorrect offset being provided for one of the registers"
* tag 'regulator-fix-v4.0-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator:
regulator: Fix documentation for regmap in the config
regulator: palmas: Correct TPS659038 register definition for REGEN2
Linus Torvalds [Tue, 24 Mar 2015 23:42:54 +0000 (16:42 -0700)]
Merge tag 'regmap-fix-v4.0-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Pull regmap fix from Mark Brown:
"This patch fixes a bad interaction between the support that was added
for having regmaps without devices for early system controller
initialization and the trace support.
There's a very good analysis of the actual issue in the commit message
for the change"
* tag 'regmap-fix-v4.0-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
regmap: introduce regmap_name to fix syscon regmap trace events
Steve Capper [Sun, 22 Mar 2015 14:51:51 +0000 (14:51 +0000)]
arm64: percpu: Make this_cpu accessors pre-empt safe
this_cpu operations were implemented for arm64 in:
  5284e1b arm64: xchg: Implement cmpxchg_double
  f97fc81 arm64: percpu: Implement this_cpu operations
Unfortunately, it is possible for pre-emption to take place between
address generation and data access. This can lead to cases where data
is being manipulated by this_cpu for a different CPU than it was
called on, which effectively breaks the spec.
This patch disables pre-emption for the this_cpu operations
guaranteeing that address generation and data manipulation take place
without a pre-emption in-between.
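The pattern of the fix, sketched for the read accessor (the write and
read-modify-write variants follow the same shape):

    #define _percpu_read(pcp)                                          \
    ({                                                                 \
        typeof(pcp) __retval;                                          \
        preempt_disable();                                             \
        __retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)),     \
                                              sizeof(pcp));            \
        preempt_enable();                                              \
        __retval;                                                      \
    })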
Fixes: 5284e1b4bc8a ("arm64: xchg: Implement cmpxchg_double") Fixes: f97fc810798c ("arm64: percpu: Implement this_cpu operations") Reported-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: remove space after type cast] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
tools lib traceevent: Free filter tokens in process_filter()
valgrind showed that the filter token wasn't being freed properly in
process_filter().
Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/20150324135923.817723903@goodmis.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
tools lib traceevent: Add way to find sub buffer boundary
For debugging purposes, it may be helpful for the kbuffer library to flag
when crossing a sub buffer.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/20150324135923.650983637@goodmis.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
tools lib traceevent kbuffer: Remove extra update to data pointer in PADDING
When a event PADDING is hit (a deleted event that is still in the ring
buffer), translate_data() sets the length of the padding and also updates
the data pointer which is passed back to the caller.
This is unneeded because the caller also updates the data pointer with
the passed back length. translate_data() should not update the pointer,
only set the length.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: stable@vger.kernel.org # 3.12+ Link: http://lkml.kernel.org/r/20150324135923.461431960@goodmis.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Steven Rostedt [Tue, 24 Mar 2015 13:57:54 +0000 (09:57 -0400)]
tools lib traceevent: Make plugin options either string or boolean
When a plugin option is defined, by default it is a boolean (true or false).
If the option is something else, then it needs to set its "value" field to
a default string other than NULL (can be just "").
If the value is not set then the option is considered boolean, and the
updating of the option value will be handled accordingly.
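For reference, the option description then carries the distinction in its
value field, roughly:

    struct pevent_plugin_option {
        char *name;
        char *plugin_alias;
        char *description;
        char *value;    /* NULL => boolean option; non-NULL (even "") => string option */
        void *priv;
        int   set;      /* for boolean options: current state */
    };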
Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/20150324135923.308372986@goodmis.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
There is a pevent_data_comm_from_pid() that returns the cmdline stored for
a given pid in order for users to map pids to comms, but there's no method
to convert a comm back to a pid. This is useful for filters that specify
a comm instead of a PID (it's faster than searching each individual event).
Add a way to retrieve a pid from a comm. Since there can be more than one
pid associated with a comm, it returns a data structure that lets the user
iterate over all the saved pids for a given comm.
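Usage might look like the following sketch (function names as described
above; treat them as illustrative):

    struct cmdline *cmd = NULL;

    while ((cmd = pevent_data_pid_from_comm(pevent, "bash", cmd))) {
        int pid = pevent_cmdline_pid(pevent, cmd);
        printf("bash -> pid %d\n", pid);
    }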
Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/20150324135923.001103479@goodmis.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
tools lib traceevent: Copy trace_clock and free it
The pevent->trace_clock should not be a direct pointer to what was
given. It should be copied and freed.
Note, valgrind pointed this out when a caller passed in a pointer that
needed to be freed and it never was. Ideally, pevent should copy it
(which this change does), and free the copy. It's up to the caller to
free the clock string passed in.
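In sketch form, registration now copies the string:

    int pevent_register_trace_clock(struct pevent *pevent, const char *trace_clock)
    {
        pevent->trace_clock = strdup(trace_clock);
        if (!pevent->trace_clock) {
            errno = ENOMEM;
            return -1;
        }
        return 0;   /* the caller still owns and frees its own copy */
    }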
Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/20150324135922.695906738@goodmis.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf symbols: Save DSO loading errno to better report errors
Before, when some problem happened while trying to load the kernel
symtab, 'perf top' would show:
┌─Warning:───────────────────────────┐
│The vmlinux file can't be used. │
│Kernel samples will not be resolved.│
│ │
│ │
│Press any key... │
└────────────────────────────────────┘
Now, it reports:
# perf top --vmlinux /dev/null
┌─Warning:───────────────────────────────────────────┐
│The /tmp/passwd file can't be used: Invalid ELF file│
│Kernel samples will not be resolved. │
│ │
│ │
│Press any key... │
└────────────────────────────────────────────────────┘
This is possible because we now register the reason for not being able
to load the symtab in the dso->load_errno member, and provide a
dso__strerror_load() routine to format this error into a strerror like
string with a short reason for the error while loading.
That can be just forwarding the dso__strerror_load() call to
strerror_r(), or, for a separate errno range providing a custom message.
Reported-by: Ingo Molnar <mingo@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Don Zickus <dzickus@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-u5rb5uq63xqhkfb8uv2lxd5u@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf target: Simplify handling of strerror_r return
To deal with forwarding the strerror_r (GNU) return value we need to check
if the returned value is the buffer we passed in or maybe some constant
(unknown error); simplify that by using scnprintf, which will do all the
buflen size checks, trimming if needed.
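A sketch of the simplified pattern (the GNU strerror_r may return either the
passed buffer or a pointer to a constant string):

    const char *msg = strerror_r(errnum, buf, buflen);

    if (msg != buf)
        scnprintf(buf, buflen, "%s", msg);   /* copy/trim into the caller's buffer */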
Acked-by: Jiri Olsa <jolsa@redhat.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Don Zickus <dzickus@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-d0ik6i5gjew56j0qphql28ou@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Vinson Lee [Mon, 23 Mar 2015 19:09:16 +0000 (12:09 -0700)]
perf tools: Work around lack of sched_getcpu in glibc < 2.6.
This patch fixes this build error with glibc < 2.6.
CC util/cloexec.o
cc1: warnings being treated as errors
util/cloexec.c: In function ‘perf_flag_probe’:
util/cloexec.c:24: error: implicit declaration of function
‘sched_getcpu’
util/cloexec.c:24: error: nested extern declaration of ‘sched_getcpu’
make: *** [util/cloexec.o] Error 1
Signed-off-by: Vinson Lee <vlee@twitter.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Yann Droneaud <ydroneaud@opteya.com> Cc: stable@vger.kernel.org # 3.18+ Link: http://lkml.kernel.org/r/1427137761-16119-1-git-send-email-vlee@twopensource.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Namhyung Kim [Mon, 23 Mar 2015 06:30:40 +0000 (15:30 +0900)]
perf kmem: Print big numbers using thousands' group
Like perf stat, this makes it easy to read the numbers in the stat output, like below:
# perf kmem stat
SUMMARY
=======
Total bytes requested: 9,770,900
Total bytes allocated: 9,782,712
Total bytes wasted on internal fragmentation: 11,812
Internal fragmentation: 0.120744%
Cross CPU allocations: 74/152,819
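The mechanism is printf's thousands-grouping flag combined with the user's
locale; a minimal sketch (variable name illustrative):

    setlocale(LC_NUMERIC, "");                                   /* from <locale.h> */
    printf("Total bytes requested: %'lu\n", total_requested);    /* ' groups thousands */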
Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1427092244-22764-2-git-send-email-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Javi Merino [Fri, 20 Mar 2015 18:12:55 +0000 (18:12 +0000)]
tools lib traceevent: Factor out allocating and processing args
The sequence of allocating the print_arg field, calling process_arg()
and verifying that the next event delimiter is repeated twice in
process_hex() and will also be used for process_int_array().
Factor it out to a function to avoid writing the same code again and
again.
Signed-off-by: Javi Merino <javi.merino@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lkml.kernel.org/r/1426875176-30244-2-git-send-email-javi.merino@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Masami Hiramatsu [Sun, 22 Mar 2015 11:40:22 +0000 (20:40 +0900)]
perf probe: Fix to get unmapped symbol address on kernel
Fix to correctly get the unmapped symbol address on the kernel. This allows
us to probe syscall symbols, which are aliases of SyS_ functions, using
debuginfo.
Without this fix:
----
# ./perf probe -a sys_write
Failed to find debug information for address 3b0100
Probe point 'sys_write' not found.
Error: Failed to add events.
----
The address 0x3b0100 is a mapped address, and not usable
in debuginfo.
With this fix:
----
# ./perf probe -a sys_write
Added new event:
probe:sys_write (on sys_write)
You can now use it in all perf tools, such as:
perf record -e probe:sys_write -aR sleep 1
----
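The gist of the translation, as a hedged sketch with illustrative names and
parameters (the real fix goes through perf's map handling):

#include <stdint.h>

/* DWARF in vmlinux records link-time ("unmapped") addresses, so translate
 * the runtime map address back before asking the debuginfo about it. */
static uint64_t unmap_kernel_addr(uint64_t mapped_addr,
				  uint64_t map_start,	/* runtime base of the map */
				  uint64_t link_base)	/* vmlinux link-time base  */
{
	/* e.g. a small mapped address such as 0x3b0100 becomes the large
	 * link-time address that the debuginfo actually contains. */
	return mapped_addr - map_start + link_base;
}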
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: He Kuang <hekuang@huawei.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/20150322114022.32639.19096.stgit@localhost.localdomain Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Yunlong Song [Mon, 23 Mar 2015 03:50:05 +0000 (11:50 +0800)]
perf tools: Remove (null) value of "Sort order" for perf mem report
When '--sort' is not set, 'perf mem report' prints a null pointer as the
value of the sort order, so fix it.
Example:
Before this patch:
$ perf mem report
# To display the perf.data header info, please use --header/--header-only options.
#
# Samples: 18 of event 'cpu/mem-loads/pp'
# Total weight : 188
# Sort order : (null)
#
...
After this patch:
$ perf mem report
# To display the perf.data header info, please use --header/--header-only options.
#
# Samples: 18 of event 'cpu/mem-loads/pp'
# Total weight : 188
# Sort order : local_weight,mem,sym,dso,symbol_daddr,dso_daddr,snoop,tlb,locked
#
...
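A hedged sketch of the fallback idea only; the variable names are
illustrative, and the default key list is the one shown in the output above:

#include <stdio.h>

static const char * const mem_default_sort =
	"local_weight,mem,sym,dso,symbol_daddr,dso_daddr,snoop,tlb,locked";

static void print_sort_order(const char *sort_order)
{
	/* Never hand printf a NULL string for the header line. */
	printf("# Sort order   : %s\n",
	       sort_order ? sort_order : mem_default_sort);
}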
Signed-off-by: Yunlong Song <yunlong.song@huawei.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1427082605-12881-1-git-send-email-yunlong.song@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
we've switched to weak references, broke that assumption but forgot to
fix it up.
Since we still force-disable planes it's only possible to hit this
when racing multiple rmfb with fbdev restoring or similar evil things.
As long as userspace is nice it's impossible to hit the BUG_ON.
But the BUG_ON would most likely be hit from fbdev code, which usually
involves the console_lock besides all modeset locks. So very likely
we'd never get the bug reports if this was hit in the wild, hence
better be safe than sorry and backport.
Spotted by Matt Roper while reviewing other patches.
[airlied: pull this back into 4.0 - the oops happens there]
Cc: stable@vger.kernel.org Cc: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
The failure happens when attempting to allocate pages for
'struct kvm_memslots'. However, it doesn't have to be in
physically contiguous (kmalloc-ed) address space, so change
the allocation to kvm_kvzalloc() so that it will be
vmalloc-ed when its size is more than a page.
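A hedged sketch of the kzalloc/vzalloc fallback pattern that kvm_kvzalloc()-style
helpers follow (not necessarily the exact KVM code):

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *kvzalloc_sketch(unsigned long size)
{
	/* Small allocations stay physically contiguous; big ones only need
	 * to be virtually contiguous, so fall back to vzalloc(). */
	if (size <= PAGE_SIZE)
		return kzalloc(size, GFP_KERNEL);

	return vzalloc(size);
}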
Signed-off-by: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Mike Snitzer [Mon, 23 Mar 2015 21:01:43 +0000 (17:01 -0400)]
dm: fix add_disk() NULL pointer due to race with free_dev()
Commit c4db59d31e39 ("fs: don't reassign dirty inodes to
default_backing_dev_info") exposed DM to a latent race in free_dev() vs
add_disk() in relation to management of the device's minor number.
Fix this by refactoring free_dev() to match cleanup order of the
alloc_dev() error path. Move cleanup of the gendisk, queue, and bdev
to _before_ the cleanup of the idr managed minor number.
Also, purely due to cleanup that fell out during the free_dev() audit:
- adjust dm_blk_close() to access the gendisk's private_data under
the _minor_lock spinlock.
- move __dm_destroy()'s dm_get_live_table() call out from under the
_minor_lock spinlock.
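A sketch of the ordering idea only, with stand-in types and stub helpers
(none of these names are the dm functions):

#include <stdlib.h>

struct mapped_device_sketch { int minor; };

/* Stand-ins for the real teardown helpers. */
static void put_disk_queue_bdev(struct mapped_device_sketch *md) { (void)md; }
static void release_minor(int minor) { (void)minor; }

/* Block-layer objects go first, the idr-managed minor is released last, so
 * a racing add_disk() on a reused minor cannot see a half-freed device. */
static void free_dev_sketch(struct mapped_device_sketch *md)
{
	int minor = md->minor;

	put_disk_queue_bdev(md);
	release_minor(minor);
	free(md);
}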
Catalin Marinas [Mon, 23 Mar 2015 15:06:50 +0000 (15:06 +0000)]
arm64: Use the reserved TTBR0 if context switching to the init_mm
The idle_task_exit() function may call switch_mm() with next ==
&init_mm. On arm64, init_mm.pgd cannot be used for user mappings, so
this patch simply sets the reserved TTBR0.
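The shape of the check, sketched from the description; cpu_set_reserved_ttbr0()
is the arm64 helper as recalled here, and the rest of the function is elided:

#include <linux/mm_types.h>
#include <linux/sched.h>
#include <asm/mmu_context.h>

/* Sketch only: init_mm.pgd has no user mappings and kernel addresses are
 * covered by TTBR1, so point TTBR0 at the reserved (empty) table instead
 * of trying to install init_mm.pgd for user space. */
static inline void switch_mm_sketch(struct mm_struct *prev,
				    struct mm_struct *next,
				    struct task_struct *tsk)
{
	if (next == &init_mm) {
		cpu_set_reserved_ttbr0();
		return;
	}

	/* ... otherwise fall through to the normal context switch path ... */
}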
Cc: <stable@vger.kernel.org> Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org> Tested-by: Jon Medhurst (Tixy) <tixy@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
1) Validate iov ranges before feeding them into iov_iter_init(), from
Al Viro.
2) We changed copy_msghdr_from_user() to zero out the msg_namelen
if a NULL pointer is given for the msg_name. Do the same in the
compat code too. From Catalin Marinas.
3) Fix partially initialized tuples in netfilter conntrack helper, from
Ian Wilson.
4) Missing continue; statement in nft_hash walker can lead to crashes,
from Herbert Xu.
5) tproxy_tg6_check looks for IP6T_INV_PROTO in ->flags instead of
->invflags, fix from Pablo Neira Ayuso.
6) Incorrect memory accounting of TCP FINs can result in negative socket
memory accounting values. Fix from Josh Hunt.
7) Don't allow virtual functions to enable VLAN promiscuous mode in
be2net driver, from Vasundhara Volam.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
netfilter: nft_compat: set IP6T_F_PROTO flag if protocol is set
cx82310_eth: wait for firmware to become ready
net: validate the range we feed to iov_iter_init() in sys_sendto/sys_recvfrom
net: compat: Update get_compat_msghdr() to match copy_msghdr_from_user() behaviour
be2net: use PCI MMIO read instead of config read for errors
be2net: restrict MODIFY_EQ_DELAY cmd to a max of 8 EQs
be2net: Prevent VFs from enabling VLAN promiscuous mode
tcp: fix tcp fin memory accounting
ipv6: fix backtracking for throw routes
net: ethernet: pcnet32: Setup the SRAM and NOUFLO on Am79C97{3, 5}
ipv6: call ipv6_proxy_select_ident instead of ipv6_select_ident in udp6_ufo_fragment
netfilter: xt_TPROXY: fix invflags check in tproxy_tg6_check()
netfilter: restore rule tracing via nfnetlink_log
netfilter: nf_tables: allow to change chain policy without hook if it exists
netfilter: Fix potential crash in nft_hash walker
netfilter: Zero the tuple in nfnl_cthelper_parse_tuple()
David S. Miller [Mon, 23 Mar 2015 16:22:10 +0000 (09:22 -0700)]
sparc64: Fix several bugs in memmove().
Firstly, handle zero length calls properly. Believe it or not there
are a few of these happening during early boot.
Next, we can't just drop to a memcpy() call in the forward copy case
where dst <= src. The reason is that the cache initializing stores
used in the Niagara memcpy() implementations can end up clearing out
cache lines before we've sourced their original contents completely.
For example, considering NG4memcpy, the main unrolled loop begins like
this:
Assume dst is 64 byte aligned and let's say that dst is src - 8 for
this memcpy() call. That store at the end there is the one to the
first line in the cache line, thus clearing the whole line, which
clobbers "src + 0x28" before it even gets loaded.
To avoid this, just fall through to a simple copy only mildly
optimized for the case where src and dst are 8 byte aligned and the
length is a multiple of 8 as well. We could get fancy and call
GENmemcpy() but this is good enough for how this thing is actually
used.
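A hedged C rendering of that fallback behaviour (the real fix is sparc64
assembly; this just illustrates the idea, and memmove_forward_sketch is an
invented name):

#include <stddef.h>
#include <stdint.h>

/* Only valid for the forward case (dst <= src), which is exactly where the
 * block-init stores in the Niagara memcpy() variants cause trouble. */
void *memmove_forward_sketch(void *dst, const void *src, size_t len)
{
	unsigned char *d = dst;
	const unsigned char *s = src;
	size_t i;

	if (len == 0)			/* zero-length calls must be no-ops */
		return dst;

	if ((((uintptr_t)d | (uintptr_t)s | len) & 7) == 0) {
		/* 8-byte aligned, length a multiple of 8: copy doublewords */
		uint64_t *dq = (uint64_t *)(void *)d;
		const uint64_t *sq = (const uint64_t *)(const void *)s;

		for (i = 0; i < len / 8; i++)
			dq[i] = sq[i];
	} else {
		for (i = 0; i < len; i++)	/* plain forward byte copy */
			d[i] = s[i];
	}
	return dst;
}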
Reported-by: David Ahern <david.ahern@oracle.com> Reported-by: Bob Picco <bpicco@meloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>