Dave Hansen [Fri, 21 Jun 2013 15:51:36 +0000 (08:51 -0700)]
perf: Drop sample rate when sampling is too slow
This patch keeps track of how long perf's NMI handler is taking,
and also calculates how many samples perf can take a second. If
the sample length times the expected max number of samples
exceeds a configurable threshold, it drops the sample rate.
This way, we don't have a runaway sampling process eating up the
CPU.
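As a sketch of the idea (illustrative standalone C, not the kernel code; names like avg_sample_ns and THRESHOLD_PERCENT are mine):

  /* Hedged sketch of the throttling check described above. */
  #include <stdint.h>

  #define THRESHOLD_PERCENT 25  /* assumed CPU-time budget for sampling */

  static uint64_t avg_sample_ns;  /* running average of NMI handler time */

  /* Called after each sample with how long the handler took. */
  static void sample_took(uint64_t ns, uint64_t *sample_rate_hz)
  {
      /* cheap exponential moving average of the sample length */
      avg_sample_ns = (avg_sample_ns * 7 + ns) / 8;

      /* expected time per second spent sampling at the current rate */
      uint64_t spent_ns = avg_sample_ns * *sample_rate_hz;

      /* over the configurable threshold: drop the sample rate */
      if (spent_ns > 1000000000ULL * THRESHOLD_PERCENT / 100)
          *sample_rate_hz /= 2;
  }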
This patch can tend to drop the sample rate down to a level where
perf doesn't work very well. *BUT* the alternative is that my
system hangs because it spends all of its time handling NMIs.
I'll take a busted performance tool over an entire system that's
busted and undebuggable any day.
BTW, my suspicion is that there's still an underlying bug here.
Using the HPET instead of the TSC is definitely a contributing
factor, but I suspect there are some other things going on.
But, I can't go dig down on a bug like that with my machine
hanging all the time.
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus@samba.org
Cc: acme@ghostprotocols.net
Cc: Dave Hansen <dave@sr71.net>
[ Prettified it a bit. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Dave Hansen [Fri, 21 Jun 2013 15:51:35 +0000 (08:51 -0700)]
x86: Warn when NMI handlers take large amounts of time
I have a system which is causing all kinds of problems. It has
8 NUMA nodes, and lots of cores that can fight over cachelines.
If things are not working _perfectly_, then NMIs can take longer
than expected.
If we get too many of them backed up to each other, we can
easily end up in a situation where we are doing nothing *but*
running NMIs. The biggest problem, though, is that this happens
_silently_. You might be lucky to get an hrtimer warning, but
most of the time the system simply hangs.
This patch should at least give us some warning before we fall
off the cliff.
The warning message is triggered whenever we notice the longest NMI
we've seen to date. You can always view and reset this value
via the debugfs interface if you like.
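A minimal sketch of that mechanism (illustrative only; nmi_longest_ns comes from the patch description, the rest — including the message text — is assumed):

  /* Track the longest NMI seen so far and report new records. */
  static u64 nmi_longest_ns;  /* exposed via debugfs per the patch */

  static void note_nmi_duration(u64 start_ns, u64 end_ns)
  {
      u64 delta = end_ns - start_ns;

      if (delta <= nmi_longest_ns)
          return;

      nmi_longest_ns = delta;  /* new record: warn about it */
      pr_info("INFO: NMI handler took too long to run: %llu ns\n",
              (unsigned long long)delta);
  }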
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus@samba.org
Cc: acme@ghostprotocols.net
Cc: Dave Hansen <dave@sr71.net>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Oleg Nesterov [Thu, 20 Jun 2013 15:50:20 +0000 (17:50 +0200)]
hw_breakpoint: Introduce "struct bp_cpuinfo"
This patch simply moves all per-cpu variables into the new
single per-cpu "struct bp_cpuinfo".
To me this looks more logical and clean, but it can also
simplify further potential changes. In particular, I do not
think this memory should be per-cpu; it is never used "locally".
After this change it is trivial to turn it into, say,
bootmem[nr_cpu_ids].
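The consolidated structure looks roughly like this (sketch based on the description above; the comments are mine):

  struct bp_cpuinfo {
      /* Number of pinned CPU breakpoints on this cpu */
      unsigned int  cpu_pinned;
      /* tsk_pinned[n] = number of tasks having n+1 breakpoints */
      unsigned int  *tsk_pinned;
      /* Number of non-pinned cpu/task breakpoints on this cpu */
      unsigned int  flexible;
  };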
Oleg Nesterov [Thu, 20 Jun 2013 15:50:13 +0000 (17:50 +0200)]
hw_breakpoint: Simplify the "weight" usage in toggle_bp_slot() paths
Change toggle_bp_slot() to make "weight" negative if !enable.
This way we can always use "+ weight" without an additional
"if (enable)" check, and toggle_bp_task_slot() no longer needs
this arg.
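In other words (illustrative before/after sketch, not the literal diff):

  /* before: a branch at every accounting site */
  if (enable)
      count += weight;
  else
      count -= weight;

  /* after: negate once, then always add */
  if (!enable)
      weight = -weight;
  count += weight;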
Oleg Nesterov [Thu, 20 Jun 2013 15:50:11 +0000 (17:50 +0200)]
hw_breakpoint: Simplify list/idx mess in toggle_bp_slot() paths
The enable/disable logic in toggle_bp_slot() is not symmetrical
and imho very confusing. "old_count" in toggle_bp_task_slot() is
actually new_count because this bp was already removed from the
list.
Change toggle_bp_slot() to always call list_add/list_del after
toggle_bp_task_slot(). This way old_idx is task_bp_pinned() and
this entry should be decremented, new_idx is old_idx +/- weight and we
need to increment this element. The code/logic looks obvious.
Oleg Nesterov [Thu, 20 Jun 2013 15:50:09 +0000 (17:50 +0200)]
hw_breakpoint: Use cpu_possible_mask in {reserve,release}_bp_slot()
fetch_bp_busy_slots() and toggle_bp_slot() use
for_each_online_cpu(); this is obviously wrong wrt cpu_up() or
cpu_down(), since we can over- or under-account the per-cpu numbers.
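The fix is to iterate all possible CPUs instead; roughly (account_slots() is a hypothetical stand-in for the accounting done there):

  /* before: misses CPUs that happen to be offline right now */
  for_each_online_cpu(cpu)
      account_slots(cpu);

  /* after: stable regardless of cpu_up()/cpu_down() */
  for_each_possible_cpu(cpu)
      account_slots(cpu);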
Oleg Nesterov [Thu, 20 Jun 2013 15:50:06 +0000 (17:50 +0200)]
hw_breakpoint: Fix cpu check in task_bp_pinned(cpu)
trinity fuzzer triggered WARN_ONCE("Can't find any breakpoint
slot") in arch_install_hw_breakpoint() but the problem is not
arch-specific.
The problem is that task_bp_pinned(cpu) checks "cpu == iter->cpu",
but this doesn't account for the "all cpus" events with
iter->cpu < 0.
This means that, say, register_user_hw_breakpoint(tsk) can
happily create an arbitrary number (> HBP_NUM) of breakpoints
which can not be activated. toggle_bp_task_slot() is equally
wrong for the same reason, and nr_task_bp_pinned[] can have
negative entries.
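With the fix, the per-task counting loop looks roughly like this (sketch modeled on the hw_breakpoint code; treat the helper details as illustrative):

  /* Count tsk's breakpoints pinned on 'cpu'; an event created with
   * cpu < 0 ("all cpus") must be counted on every cpu. */
  list_for_each_entry(iter, &bp_task_head, hw.bp_list) {
      if (iter->hw.bp_target == tsk &&
          find_slot_idx(iter) == type &&
          (iter->cpu < 0 || cpu == iter->cpu))
          count += hw_breakpoint_weight(iter);
  }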
Simple test:
# perl -e 'sleep 1 while 1' &
# perf record -e mem:0x10,mem:0x10,mem:0x10,mem:0x10,mem:0x10 -p `pidof perl`
Before this patch this triggers the same problem/WARN_ON(),
after the patch it correctly fails with -ENOSPC.
kprobes: Fix arch_prepare_kprobe to handle copy insn failures
Fix arch_prepare_kprobe() to correctly handle failures in
instruction copying. This fix is related to the previous fix,
commit 8101376, which made __copy_instruction() return an error
result on failure, but the caller site was not updated to handle
it. Thus, this is the other half of the bugfix.
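The caller-side handling amounts to something like (sketch, not the literal patch):

  /* arch_prepare_kprobe() path: propagate a failed copy instead of
   * silently accepting a half-copied instruction. */
  ret = __copy_instruction(p->ainsn.insn, p->addr);
  if (!ret)
      return -EINVAL;  /* could not copy/decode the instruction */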
This fix is also related to the following bug-report:
Andi Kleen [Tue, 18 Jun 2013 00:36:52 +0000 (17:36 -0700)]
perf/x86/intel: Add mem-loads/stores support for Haswell
mem-loads is basically the same as on Sandy Bridge,
but we use a separate string for later changes.
Haswell doesn't support the full precise store mode,
so we emulate it using the "DataLA" facility.
This allows us to do everything, but for data sources we
can only detect whether it was an L1 hit or not.
There is no explicit enable bit anymore, so we have
to tie it to a perf internal only flag.
The address is supported for all memory-related PEBS
events with DataLA. Instead of logging it only for the
load and store events, we allow logging it for all
(it will simply be 0 if the current event does not
support it).
Andi Kleen [Tue, 18 Jun 2013 00:36:51 +0000 (17:36 -0700)]
perf/x86/intel: Support Haswell/v4 LBR format
Haswell has two additional LBR "from" flags for TSX: in_tx and
abort_tx, implemented as a new "v4" version of the LBR format.
Handle those flags and adjust the sign extension code to still
extend correctly. The flags are exported in the LBR record
similarly to the existing misprediction flag.
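Sketchwise, the flags live in the top bits of the LBR "from" address, which must be stripped before sign-extending the address back (bit positions as in the kernel's LBR code; the helper below is illustrative):

  #define LBR_FROM_FLAG_MISPRED  (1ULL << 63)
  #define LBR_FROM_FLAG_IN_TX    (1ULL << 62)
  #define LBR_FROM_FLAG_ABORT    (1ULL << 61)

  static inline u64 lbr_from_addr(u64 from)
  {
      /* shift the three flag bits out, then sign-extend back */
      return (u64)(((s64)from << 3) >> 3);
  }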
Andi Kleen [Tue, 18 Jun 2013 00:36:50 +0000 (17:36 -0700)]
perf/x86/intel: Move NMI clearing to end of PMI handler
This avoids some problems with spurious PMIs on Haswell.
Haswell seems to behave more like P4 in this regard. Do
the same thing as the P4 perf handler by unmasking
the NMI only at the end. Shouldn't make any difference
for earlier family 6 cores.
(Tested on Haswell, IvyBridge, Westmere, Saltwell (Atom).)
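Conceptually the change just moves the LVTPC unmask to the end of the handler (sketch; process_counters() is a hypothetical stand-in):

  static int pmi_handler(void)
  {
      int handled = process_counters();

      /* previously done at the top of the handler; doing it only
       * here avoids the spurious PMIs on Haswell */
      apic_write(APIC_LVTPC, APIC_DM_NMI);

      return handled;
  }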
Andi Kleen [Tue, 18 Jun 2013 00:36:47 +0000 (17:36 -0700)]
perf/x86/intel: Add Haswell PEBS record support
Add support for the Haswell extended (fmt2) PEBS format.
It has a superset of the nhm (fmt1) PEBS fields, but has a
longer record so we need to adjust the code paths.
The main advantage is the new "EventingRip" support which
directly gives the instruction, not the off-by-one instruction. So
with precise == 2 we use that directly and don't try to use LBRs
and walking basic blocks. This lowers the overhead of using
precise significantly.
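The precise-IP handling then becomes roughly (sketch; treat the exact field and helper names as assumptions):

  /* fmt2 PEBS: take the eventing IP straight from the record */
  if (event->attr.precise_ip > 1 && pebs_format >= 2)
      regs->ip = pebs->real_ip;        /* the exact sampled instruction */
  else if (event->attr.precise_ip > 1)
      intel_pmu_pebs_fixup_ip(regs);   /* old LBR-based off-by-one fixup */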
Implement a perf PMU to handle IOMMU performance counters and events.
The PMU only supports counting mode (e.g. perf stat). Since the counters
are shared across all cores, the PMU is implemented as "system-wide" mode.
To invoke the AMD IOMMU PMU, issue a perf tool command such as:
./perf stat -a -e amd_iommu/<events>/ <command>
or:
./perf stat -a -e amd_iommu/config=<config-data>,config1=<config1-data>/ <command>
For example:
./perf stat -a -e amd_iommu/mem_trans_total/ <command>
The resulting count will be how many total IOMMU peripheral memory
operations were performed during the command execution window.
Add functionality to check the availability of the AMD IOMMU Performance
Counters and export it to other core drivers, such as, in this case, a
perf AMD IOMMU PMU. This feature is not bound to any specific AMD
family/model beyond the presence of an IOMMU with performance counters enabled.
The AMD IOMMU performance counters support static counting only at this time.
Dave Hansen [Thu, 30 May 2013 17:45:59 +0000 (10:45 -0700)]
perf/x86: Only print PMU state when also WARN()'ing
intel_pmu_handle_irq() has a warning in it if it does too many
loops. It is a WARN_ONCE(), but the perf_event_print_debug()
call beneath it is unconditional. For the first warning, you get
a nice backtrace and message, but subsequent ones just dump the
PMU state with no leading messages. I doubt this is what was
intended.
This patch will only print the PMU state when paired with the
WARN_ON() text. It effectively open-codes WARN_ONCE()'s
one-time-only logic.
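That is, something like (sketch of the open-coded once-only logic):

  if (++loops > 100) {
      static bool warned;

      if (!warned) {
          WARN(1, "perfevents: irq loop stuck!\n");
          perf_event_print_debug();  /* dump PMU state once */
          warned = true;
      }
      intel_pmu_reset();  /* clear the PMU state */
      goto done;
  }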
My suspicion is that the code really just wants to make sure we
do not sit in the loop and spit out a warning for every loop
iteration after the 100th. From what I've seen, this is very
unlikely to happen since we also clear the PMU state.
After this patch, instead of seeing the PMU state dumped each
time, you will just see:
[57494.894540] perf_event_intel: clearing PMU state on CPU#129
[57579.539668] perf_event_intel: clearing PMU state on CPU#10
[57587.137762] perf_event_intel: clearing PMU state on CPU#134
[57623.039912] perf_event_intel: clearing PMU state on CPU#114
[57644.559943] perf_event_intel: clearing PMU state on CPU#118
...
Andrew Hunter [Thu, 23 May 2013 18:07:03 +0000 (11:07 -0700)]
perf/x86: Reduce stack usage of x86_schedule_events()
x86_schedule_events() caches event constraints on the stack during
scheduling. Given the number of possible events, this is 512 bytes of
stack; since it can be invoked under schedule() under god-knows-what,
this is causing stack blowouts.
Trade some space usage for stack safety: add a place to cache the
constraint pointer to struct perf_event. For 8 bytes per event (1% of
its size) we can save the giant stack frame.
This shouldn't change any aspect of scheduling whatsoever and while in
theory the locality's a tiny bit worse, I doubt we'll see any
performance impact either.
Tested: `perf stat whatever` does not blow up and produces
results that aren't hugely obviously wrong. I'm not sure how to run
particularly good tests of perf code, but this should not produce any
functional change whatsoever.
Commit 2b923c8 ("perf/x86: Check branch sampling priv level in generic code")
was missing the check for the hypervisor (HV) priv level, so add it back.
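The restored check is essentially (sketch; the exact placement in the generic code may differ):

  /* gate hypervisor branch sampling like kernel branch sampling */
  if ((attr->branch_sample_type & PERF_SAMPLE_BRANCH_HV)
      && perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
      return -EACCES;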
With this patch, we get the following correct behavior:
# echo 2 >/proc/sys/kernel/perf_event_paranoid
$ perf record -j any,k noploop 1
Error:
You may not have permission to collect stats.
Consider tweaking /proc/sys/kernel/perf_event_paranoid:
-1 - Not paranoid at all
0 - Disallow raw tracepoint access for unpriv
1 - Disallow cpu events for unpriv
2 - Disallow kernel profiling for unpriv
$ perf record -j any,hv noploop 1
Error:
You may not have permission to collect stats.
Consider tweaking /proc/sys/kernel/perf_event_paranoid:
-1 - Not paranoid at all
0 - Disallow raw tracepoint access for unpriv
1 - Disallow cpu events for unpriv
2 - Disallow kernel profiling for unpriv
Signed-off-by: Stephane Eranian <eranian@google.com>
Acked-by: Petr Matousek <pmatouse@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20130606090204.GA3725@quad
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf/x86: Fix broken PEBS-LL support on SNB-EP/IVB-EP
This patch fixes broken support of PEBS-LL on SNB-EP/IVB-EP.
For some reason, the LDLAT extra reg definition for snb_ep
showed up as a duplicate in the snb table.
This patch moves the definition of LDLAT back into the
snb_ep table.
Peter Zijlstra [Tue, 4 Jun 2013 08:44:21 +0000 (10:44 +0200)]
perf: Fix mmap() accounting hole
Vince's fuzzer once again found holes. This time it spotted a leak in
the locked page accounting.
When an event had redirected output and its close() was the last
reference to the buffer we didn't have a vm context to undo accounting.
Change the code to destroy the buffer on the last munmap() and detach
all redirected events at that time. This provides us the right context
to undo the vm accounting.
David Ahern [Sat, 25 May 2013 23:50:39 +0000 (17:50 -0600)]
perf evlist: Reset SIGTERM handler in workload child process
Jiri reported hanging perf tests on latest acme's perf/core and bisected
it to 87f303a9f:
[jolsa@krava2 perf]$ cat /proc/sys/kernel/perf_event_paranoid
1
[jolsa@krava2 perf]$ ./perf record -C 0 kill
Error:
You may not have permission to collect %sstats.
Consider tweaking /proc/sys/kernel/perf_event_paranoid:
-1 - Not paranoid at all
0 - Disallow raw tracepoint access for unpriv
1 - Disallow cpu events for unpriv
2 - Disallow kernel profiling for unpriv
Need to let default handling kick in for the workload process.
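That is, restore the default disposition in the forked child before exec (sketch):

  if (fork() == 0) {
      /* child: perf's SIGTERM handler must not run here; let the
       * workload die normally on SIGTERM */
      signal(SIGTERM, SIG_DFL);
      execvp(argv[0], argv);
  }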
Reported-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: David Ahern <dsahern@gmail.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1369525839-1261-1-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Michael Ellerman [Fri, 10 May 2013 15:33:00 +0000 (17:33 +0200)]
perf: Expand definition of sysfs format attribute
Make it explicit that the format attributes may define overlapping bit
ranges. Unfortunately this was left unspecified originally, and all the
examples show non-overlapping ranges. I don't believe this is an ABI
change, as we are defining something that was previously undefined, but
others may disagree.
The POWER8 PMU would like to define overlapping ranges, as bit ranges in
the event code have different meanings for certain events. It will also
allow us to define an overarching "event" field, that encompasses all
others.
As far as I can see perf is comfortable with this change, however I am
not sure if there are any other users of the interface.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1368199980-20283-1-git-send-email-jolsa@redhat.com
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf: Power7: Make CPI stack events available in sysfs
A set of Power7 events are often used for Cycles Per Instruction (CPI) stack
analysis. Make these events available in sysfs (/sys/devices/cpu/events/) so
they can be identified using their symbolic names:
perf stat -e 'cpu/PM_CMPLU_STALL_DCACHE_MISS/' /bin/ls
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@ozlabs.org
Link: http://lkml.kernel.org/r/20130406164803.GA408@us.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jiri Olsa [Fri, 24 May 2013 11:16:38 +0000 (13:16 +0200)]
perf tests: Fix exclude_guest|exclude_host checking for attr tests
We have an event-open fallback case in __perf_evsel__open
where we zero the exclude_guest|exclude_host fields.
This means there's no way for attr tests to find out what's the right
value for those fields, so we need to check for both 0 and 1. Luckily we
still have other event parsing tests for those fields.
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1369394201-20044-3-git-send-email-jolsa@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jiri Olsa [Fri, 29 Mar 2013 15:11:02 +0000 (16:11 +0100)]
perf tools: Add automated make test suite
Adding an automated test for the build process.
To run it you need to be in the perf directory or specify one with the
PERF variable. It's also possible to specify an optional Makefile to
test via the MK variable.
The whole suite is executed twice, the second time with the O=/tmp/xxx
option added.
To run the whole suite:
$ make -f tests/make
- make_pure: cd . && make -f Makefile
test: test -x ./perf
- make_clean_all: cd . && make -f Makefile clean all
test: test -x ./perf
- make_python_perf_so: cd . && make -f Makefile python/perf.so
test: test -f ./python/perf.so
- make_debug: cd . && make -f Makefile DEBUG=1
test: test -x ./perf
- make_no_libperl: cd . && make -f Makefile NO_LIBPERL=1
test: test -x ./perf
You see command line for 'make_pure' test right away, and the output is
stored into 'make_pure' file.
To run simple test:
$ make -f tests/make make_debug
- make_debug: cd . && make -f Makefile DEBUG=1
test: test -x ./perf
At this moment the tests check for a successful build and for the
existence of several built files. Additional after-build checks could be added.
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1369398928-9809-2-git-send-email-jolsa@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jiri Olsa [Thu, 13 Dec 2012 13:08:59 +0000 (14:08 +0100)]
perf diff: Use internal rb tree for hists__precompute
There's a missing change for hists__precompute to iterate either
the entries_collapsed or entries_in tree. The change was initiated
for the hists_compute_resort function in commit
66f97ed ("perf diff: Use internal rb tree for compute resort"),
but was missing for the hists__precompute function.
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1355404152-16523-2-git-send-email-jolsa@redhat.com
[ committer note: Reduce patch size, no functional change ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Namhyung Kim [Tue, 14 May 2013 02:09:01 +0000 (11:09 +0900)]
perf top: Get rid of *_threaded() functions
Those _threaded() functions are needed to make hist tree handling
thread-safe, but AFAICS the only thing they do is force use of the
intermediate 'collapsed' tree.
This can be achieved by setting sort__need_collapse to 1 in cmd_top(),
so there's no need to keep those _threaded() variants.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1368497347-9628-4-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Namhyung Kim [Tue, 14 May 2013 02:08:59 +0000 (11:08 +0900)]
perf top: Fix -E option behavior
The -E/--entries option controls how many lines are printed on stdio
output, but it doesn't work as it should: if the -E option is
specified, print that many lines regardless of the current window
size; if not, automatically adjust the number of lines printed to fit
into the window size.
Reported-by: Minchan Kim <minchan@kernel.org>
Tested-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1368497347-9628-2-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
David Ahern [Mon, 6 May 2013 18:24:23 +0000 (12:24 -0600)]
perf record: handle death by SIGTERM
Perf data files cannot be processed until the header is updated which is
done via an on_exit handler.
If perf is killed due to a SIGTERM it does not run the on_exit hooks
leaving the perf.data file in a random state which perf-report will
happily spin on trying to read.
As noted by Mike an easy reproducer is:
perf record -a -g & sleep 1; killall perf
Fix by catching SIGTERM like it does SIGINT.
Also need to remove the kill which was added via commit f7b7c26e.
Acked-by: Stephane Eranian <eranian@google.com>
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1367864663-1309-1-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jiri Olsa [Wed, 24 Apr 2013 09:37:29 +0000 (11:37 +0200)]
perf tools: Fix tab vs spaces issue in Makefile ifdef/endif
Mismatched spaces/tabs in Makefile indentation can make the
Makefile fail. While a tabbed line can sometimes be considered
a follow-up to a rule command, mixed spaces and tabs mess up
Makefile if conditions.
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1366796273-4780-3-git-send-email-jolsa@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Namhyung Kim [Fri, 5 Apr 2013 01:26:36 +0000 (10:26 +0900)]
perf sort: Cleanup sort__has_sym setting
The sort__has_sym variable is set only if a symbol-related sort key was
added. Since the branch stack and memory sort dimensions are separated,
it doesn't need to be checked from the common dimensions.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1365125198-8334-7-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Namhyung Kim [Mon, 1 Apr 2013 11:35:19 +0000 (20:35 +0900)]
perf report: Fix alignment of symbol column when -v is given
When the -v option is given, the symbol sort key also prints its
address, but it wasn't properly aligned since hists__calc_col_len()
misses the additional part. It also missed 2 spaces for the 0x prefix
when printing.
$ perf report --stdio -v -s sym
# Samples: 133 of event 'cycles'
# Event count (approx.): 50536717
#
# Overhead Symbol
# ........ ..............................
#
12.20% 0xffffffff81384c50 v [k] intel_idle
7.62% 0xffffffff8170976a v [k] ftrace_caller
7.02% 0x2d986d B [.] 0x00000000002d986d
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1364816125-12212-4-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Namhyung Kim [Mon, 1 Apr 2013 11:35:17 +0000 (20:35 +0900)]
perf hists: Fix an invalid memory free on he->branch_info
The branch info was allocated for the whole stack and passed to the
matching hist entry for each level during sample processing. Thus
when a hist entry tries to free its branch info, as in
hists__collapse_insert_entry, it'll hit the following error.
One of the reasons 'perf test' is failing on Power appears to be due to
a bug in isupper().
isupper(c) and islower(c) should be checking 'c' against the mask 0x20.
Instead they are checking sane_ctype[c] which causes isupper() to be
true for lower case letters.
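For ASCII, case lives in bit 0x20 ('A' is 0x41, 'a' is 0x61), so a correct pair of checks looks roughly like this (illustrative sketch, not perf's actual macros):

  #include <ctype.h>

  /* ASCII upper/lower case letters differ only in bit 0x20 */
  static inline int sane_isupper(int c)
  {
      return isalpha(c) && !(c & 0x20);  /* bit clear => upper */
  }

  static inline int sane_islower(int c)
  {
      return isalpha(c) && (c & 0x20);   /* bit set => lower */
  }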
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/20130329192950.GA9312@us.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Li Zefan [Fri, 17 May 2013 02:31:04 +0000 (10:31 +0800)]
watchdog: Disallow setting watchdog_thresh to -1
In old kernels, it was allowed to set softlockup_thresh to -1 or 0
to disable softlockup detection. However watchdog_thresh only
uses 0 to disable detection; setting it to -1 just froze my
box and there was nothing I could do but reboot.
Peter Zijlstra [Tue, 28 May 2013 08:55:48 +0000 (10:55 +0200)]
perf: Fix perf mmap bugs
Vince reported a problem found by his perf-specific trinity
fuzzer.
Al noticed 2 problems with perf's mmap():
- it has issues against fork() since we use vma->vm_mm for accounting.
- it has an rb refcount leak on double mmap().
We fix the issues against fork() by using VM_DONTCOPY; I don't
think there's code out there that uses this; we didn't hear
about weird accounting problems/crashes. If we do need this to
work, the previously proposed VM_PINNED could make this work.
Aside from the rb reference leak spotted by Al, Vince's example
prog was indeed doing a double mmap() through the use of
perf_event_set_output().
This exposes another problem: since we now have 2 events with
one buffer, the accounting gets screwy because we account per
event. Fix this by making the buffer responsible for its own
accounting.
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Link: http://lkml.kernel.org/r/20130528085548.GA12193@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
In the normal path, an optprobe on an init function is
unregistered when a module goes live.
unregister_kprobe(kp)
-> __unregister_kprobe_top
->__disable_kprobe
->disarm_kprobe(ap == op)
->__disarm_kprobe
->unoptimize_kprobe : the op is queued
on unoptimizing_list
and do nothing in __unregister_kprobe_bottom
After a while (usually after 5 jiffies), kprobe_optimizer
runs to unoptimize and free the optprobe.
kprobe_optimizer
->do_unoptimize_kprobes
->arch_unoptimize_kprobes : moved to free_list
->do_free_cleaned_kprobes
->hlist_del: the op is removed
->free_aggr_kprobe
->arch_remove_optimized_kprobe
->arch_remove_kprobe
->kfree: the op is freed
Here, if kprobes_module_callback is called and the delayed
unoptimizing probe is picked BEFORE kprobe_optimizer runs,
kprobes_module_callback
->kill_kprobe
->kill_optimized_kprobe : dequeued from unoptimizing_list <=!!!
->arch_remove_optimized_kprobe
->arch_remove_kprobe
(but op is not freed, and on the kprobe hash table)
This doesn't happen if the probe unregistration is done AFTER
kprobes_module_callback is called (because at that time the op
is gone), and kprobe-tracer does it.
To fix this bug, this patch changes kprobes_module_callback to
enqueue the op to freeing_list at kill_optimized_kprobe only
if the op is unused. The unused probes on freeing_list will
be freed in do_free_cleaned_kprobes.
Note that this calls arch_remove_*kprobe twice on the
same probe, so those functions have to check for a double free.
Fortunately, most arch code already checks that, except
for mips. This will be fixed in the next patch.
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Timo Juhani Lindfors <timo.lindfors@iki.fi>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Frank Ch. Eigler <fche@redhat.com>
Cc: systemtap@sourceware.org
Cc: yrl.pp-manager.tt@hitachi.com
Cc: David S. Miller <davem@davemloft.net>
Cc: "David S. Miller" <davem@davemloft.net>
Link: http://lkml.kernel.org/r/20130522093409.9084.63554.stgit@mhiramat-M0-7522
[ Minor edits. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra [Tue, 21 May 2013 11:05:37 +0000 (13:05 +0200)]
perf/x86/amd: Rework AMD PMU init code
Josh reported that his QEMU is a bad hardware emulator and trips a
WARN in the AMD PMU init code. He requested the WARN be turned into a
pr_err() or similar.
While there, rework the code a little.
Reported-by: Josh Boyer <jwboyer@redhat.com>
Acked-by: Robert Richter <rric@kernel.org>
Acked-by: Jacob Shin <jacob.shin@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20130521110537.GG26912@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>