He Kuang [Fri, 3 Jun 2016 03:33:13 +0000 (03:33 +0000)]
perf unwind: Move unwind__prepare_access from thread_new into thread__insert_map
To determine the libunwind methods to use, we should get the
32-bit/64-bit information from a thread's maps. When a thread is newly
created, that information is not yet available. This patch moves
unwind__prepare_access() into thread__insert_map() so we can get the
information we need from the maps. Also make thread__insert_map()
return a value and print messages on error.
Signed-off-by: He Kuang <hekuang@huawei.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kan Liang <kan.liang@intel.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1464924803-22214-5-git-send-email-hekuang@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
He Kuang [Fri, 3 Jun 2016 03:33:12 +0000 (03:33 +0000)]
perf unwind: Introduce 'struct unwind_libunwind_ops' for local unwind
Currently, the libunwind operations are fixed and are chosen according
to the host architecture. This leads to a problem: if a thread runs as
x86_32 on an x86_64 machine, perf will use the x86_64 libunwind methods
to parse the callchain and get wrong results.
This patch changes the fixed libunwind operations to be thread/map
related, so each thread can have its own libunwind operations. The
local libunwind methods are registered as the default.
Signed-off-by: He Kuang <hekuang@huawei.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kan Liang <kan.liang@intel.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1464924803-22214-4-git-send-email-hekuang@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
He Kuang [Fri, 3 Jun 2016 03:33:11 +0000 (03:33 +0000)]
perf unwind: Decouple thread->address_space on libunwind
Currently, the type of thread->addr_space is unw_addr_space_t, which is
a pointer type defined in the libunwind headers. For local libunwind we
can simply include "libunwind.h", but for remote libunwind the header
file depends on the target libunwind platform. This patch uses 'void *'
instead to decouple the dependency on libunwind.
Signed-off-by: He Kuang <hekuang@huawei.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kan Liang <kan.liang@intel.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1464924803-22214-3-git-send-email-hekuang@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
He Kuang [Fri, 3 Jun 2016 03:33:10 +0000 (03:33 +0000)]
perf unwind: Use LIBUNWIND_DIR for remote libunwind feature check
Pass LIBUNWIND_DIR to the feature check flags for the remote libunwind
tests, so that perf can detect remote libunwind libraries in an
arbitrary directory.
Signed-off-by: He Kuang <hekuang@huawei.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kan Liang <kan.liang@intel.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1464924803-22214-2-git-send-email-hekuang@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Taeung Song [Tue, 7 Jun 2016 09:26:12 +0000 (18:26 +0900)]
perf config: Use new perf_config_set__init() to initialize config set
Instead of perf_config(), this function initializes the config set by
reading various files: the user config ~/.perfconfig and the system
config $(sysconfdir)/perfconfig.
If the same config variable is present in both the user and system
config files, the user config takes priority over the system config.
Signed-off-by: Taeung Song <treeze.taeung@gmail.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1465291577-20973-3-git-send-email-treeze.taeung@gmail.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Taeung Song [Tue, 7 Jun 2016 09:26:11 +0000 (18:26 +0900)]
perf config: Constructor should free its allocated memory when failing
Because perf_parse_file() calls die() on failure, the config set was
freed inside collect_config(). But it is more natural to free the
config set after collect_config() is done, once a problem has occurred.
So, in case of failure, free the config set at the end of
perf_config_set__new() instead of freeing it in collect_config().
Signed-off-by: Taeung Song <treeze.taeung@gmail.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1465291577-20973-2-git-send-email-treeze.taeung@gmail.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Tue, 7 Jun 2016 03:54:38 +0000 (03:54 +0000)]
perf tools: Fix crash in build_id_cache__kallsyms_path()
build_id_cache__kallsyms_path() accepts a string buffer but also
allocates a buffer using asnprintf(). Unfortunately, its only user
passes it a stack-allocated buffer. Freeing it causes crashes like this:
This patch simplifies build_id_cache__kallsyms_path() so that it never
allocates a string buffer and therefore never frees anything; its
caller is responsible for managing the buffer.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Fixes: 01412261d994 ("perf buildid-cache: Use path/to/bin/buildid/elf instead of path/to/bin/buildid") Link: http://lkml.kernel.org/r/1465271678-7392-1-git-send-email-wangnan0@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
For consistency with class__priv() elsewhere, and with the callback
typedef for clearing those areas (e.g. bpf_map_clear_priv_t).
Acked-by: Wang Nan <wangnan0@huawei.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-rnbiyv27ohw8xppsgx0el3xb@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
tools lib bpf: Make bpf_program__get_private() use IS_ERR()
For consistency with bpf_map__priv() and elsewhere.
Acked-by: Wang Nan <wangnan0@huawei.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-x17nk5mrazkf45z0l0ahlmo8@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
tools lib bpf: Remove _get_ from non-refcount method names
The use of this term is not warranted here, we use it in the kernel
sources and in tools/ for refcounting, so, for consistency, rename them.
Acked-by: Wang Nan <wangnan0@huawei.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-4ya1ot2e2fkrz48ws9ebiofs@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
tools lib bpf: Rename bpf_map__get_fd() to bpf_map__fd()
For consistency, leaving "get" for reference counting.
Acked-by: Wang Nan <wangnan0@huawei.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-msy8sxfz9th6gl2xjeci2btm@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
tools lib bpf: Use IS_ERR() reporting macros with bpf_map__get_def()
And for consistency, rename it to bpf_map__def(), leaving "get" for
reference counting.
Also make it return a const pointer, as suggested by Wang.
Acked-by: Wang Nan <wangnan0@huawei.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-mer00xqkiho0ymg66b5i9luw@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
tools lib bpf: Rename bpf_map__get_name() to bpf_map__name()
For consistency, leaving "get" for reference counting.
Acked-by: Wang Nan <wangnan0@huawei.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-crnflv84ejyhpba933ec71gs@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
tools lib bpf: Use IS_ERR() reporting macros with bpf_map__get_private()
To try to, over time, consistently use the IS_ERR() interface instead of
using two return values, i.e. the integer return value for an error and
the pointer address to return the bpf_map->priv pointer.
Also rename it to bpf_map__priv(), to leave the "get" term for reference
counting.
Noticed while working on using BPF for collecting non-integer syscall
argument payloads (struct sockaddr in calls such as connect(), for
instance), where we need to use BPF maps and thus generalise
bpf__setup_stdout() to connect bpf_output events with maps in a bpf
proggie.
Acked-by: Wang Nan <wangnan0@huawei.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-saypxyd6ptrct379jqgxx4bl@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Taeung Song [Mon, 6 Jun 2016 10:52:54 +0000 (19:52 +0900)]
perf config: Handle the error when config set is NULL at collect_config()
collect_config() collects all config key-value pairs from the config
files and puts each config entry into the config set. But if the config
set (i.e. the 'set' variable in collect_config()) is NULL, something is
wrong, so handle that case.
Signed-off-by: Taeung Song <treeze.taeung@gmail.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1465210380-26749-4-git-send-email-treeze.taeung@gmail.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Taeung Song [Mon, 6 Jun 2016 10:52:52 +0000 (19:52 +0900)]
perf config: Fix abnormal termination at perf_parse_file()
If a config file has broken key-value pairs, the perf process will be
forcibly terminated by die() at perf_parse_file(), called from
perf_config(), and the terminal settings can be corrupted by the
abnormal termination.
For example:
If the user config file has a wrong value 'red;default' instead of a
normal value like 'red, default' for the key 'colors.medium',
# cat ~/.perfconfig
[colors]
medium = red;default # wrong value
and if running sub-command 'top',
# perf top
the perf process is forcibly killed and the terminal settings are
broken, with a message like the one below.
Fatal: bad config file line 2 in /root/.perfconfig
So fix it.
This problem can be solved if perf_config() returns on failure instead
of calling die() in perf_parse_file().
And if a config file has broken values, show the error message and then
use the default config values instead of the broken ones.
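For reference, a valid setting for the same key simply uses the
comma-separated form mentioned above:
# cat ~/.perfconfig
[colors]
medium = red, default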
Signed-off-by: Taeung Song <treeze.taeung@gmail.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1465210380-26749-2-git-send-email-treeze.taeung@gmail.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Tue, 24 May 2016 19:52:39 +0000 (12:52 -0700)]
perf stat: Add missing aggregation headers for --metric-only CSV
When in CSV mode, --metric-only outputs a header, unlike the other
modes. Previously it did not print proper headers for the aggregation
columns, so the headers were shifted relative to the real values.
Fix this by outputting the correct headers for CSV.
Andi Kleen [Tue, 24 May 2016 19:52:37 +0000 (12:52 -0700)]
perf stat: Add computation of TopDown formulas
Implement the TopDown formulas in 'perf stat'. The topdown basic metrics
reported by the kernel are collected, and the formulas are computed and
output as normal metrics.
See the kernel commit exporting the events for details on the used
metrics.
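For reference, the standard level 1 TopDown formulas boil down to
ratios of the five abstracted slot counters. A minimal standalone C
sketch with made-up counter values (not perf stat's actual code):
#include <stdio.h>

int main(void)
{
	/* illustrative raw slot counts, not real measurements */
	double total_slots      = 4000000.0; /* topdown-total-slots */
	double slots_issued     = 2600000.0; /* topdown-slots-issued */
	double slots_retired    = 2000000.0; /* topdown-slots-retired */
	double fetch_bubbles    =  900000.0; /* topdown-fetch-bubbles */
	double recovery_bubbles =  100000.0; /* topdown-recovery-bubbles */

	double frontend_bound  = fetch_bubbles / total_slots;
	double bad_speculation = (slots_issued - slots_retired +
				  recovery_bubbles) / total_slots;
	double retiring        = slots_retired / total_slots;
	double backend_bound   = 1.0 - frontend_bound - bad_speculation -
				 retiring;

	printf("frontend bound   %5.1f%%\n", 100.0 * frontend_bound);
	printf("bad speculation  %5.1f%%\n", 100.0 * bad_speculation);
	printf("retiring         %5.1f%%\n", 100.0 * retiring);
	printf("backend bound    %5.1f%%\n", 100.0 * backend_bound);
	return 0;
}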
v2: Always print all metrics, only use thresholds for coloring.
v3: Mark retiring over threshold green, not red.
v4: Only print one decimal digit
Fix color printing of one metric
v5: Avoid printing -0.0
v6: Remove extra frontend event lookup
Andi Kleen [Mon, 30 May 2016 15:49:42 +0000 (12:49 -0300)]
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the 'frontend cycles idle' / 'backend
cycles idle' metrics in the standard perf stat output. These metrics
are not reliable in many workloads, due to out-of-order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can be done with five counters
(one of them a fixed counter) that describe the CPU pipeline behavior
at a high level.
The full TopDown methodology has many hierarchical metrics. This
implementation only supports level 1, which can be collected without
multiplexing. A full implementation of TopDown on top of perf is
available in pmu-tools' toplev (http://github.com/andikleen/pmu-tools).
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should also be implementable on other out-of-order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out-of-order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These metrics then allow computing four useful metrics: FrontendBound,
BackendBound, Retiring and BadSpeculation.
Add a new --topdown option to enable the events. When --topdown is
specified, set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing a '-'.
The actual code to compute the metrics is in follow-on patches.
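A typical invocation, once the mode is wired up, would look like the
following (assuming a kernel that exports the topdown events):
# perf stat --topdown -a sleep 1
This sets up all topdown events supported by the kernel on all CPUs and
reports the level 1 metrics, with --metric-only implied.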
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Andi Kleen [Mon, 6 Jun 2016 14:36:06 +0000 (07:36 -0700)]
perf test: Ignore .scale and other special files
'perf test' tries to parse all entries in /sys/devices/cpu/events/.
Ignore the special entries like '.scale', which cannot be directly
parsed as an event. This patch assumes all files containing a '.' are
special and can be ignored.
He Kuang [Mon, 16 May 2016 04:51:19 +0000 (04:51 +0000)]
perf script: Show call graphs when 1st event doesn't have it but some other has
There's a display inconsistency when there are multiple tracepoint
events, some of which have the 'call-graph' config option set but the
first one hasn't, i.e. the whole logic for call graph processing is
enabled only if the first tracepoint event has call-graph set.
For instance, if we record signal_deliver with call-graph and
signal_generate without:
$ perf record -g -a -e signal:signal_deliver -e signal:signal_generate/call-graph=no/
[ perf record: Captured and wrote 0.017 MB perf.data (2 samples) ]
This time, the callchain of the event signal_deliver disappeared. The
problem is that perf only checks the first evsel in the evlist to
decide whether callchains should be printed.
This patch traverses all evsels in evlist to see if any of them have
callchains, and shows the right result:
Signed-off-by: He Kuang <hekuang@huawei.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1463374279-97209-1-git-send-email-hekuang@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Tue, 31 May 2016 13:06:15 +0000 (13:06 +0000)]
perf evlist: Fix alloc_mmap() failure path
If zalloc() fails, setting evlist->mmap[i].fd is unsafe and
perf_evlist__alloc_mmap() should bail out right after that.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Fixes: d4c6fb36ac2c ("perf evsel: Record fd into perf_mmap") Link: http://lkml.kernel.org/r/1464699975-230440-1-git-send-email-wangnan0@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf evsel: Provide way to extract integer value from format_field
Factor it out of perf_evsel__intval(), which requires passing the
variable name that will then be searched in the list of tracepoint
variables for the given evsel.
In cases such as syscall file descriptor ("fd") tracking, this is
wasteful; we just need to use perf_evsel__field() and cache the
format_field.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/n/tip-r6f89jx9j5nkx037d0naviqy@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Fri, 20 May 2016 00:09:58 +0000 (17:09 -0700)]
perf/x86/intel: Add topdown events to Intel Atom
Add topdown event declarations to Silvermont / Airmont.
These cores do not support the full Top Down metrics, but a useful
subset (FrontendBound, Retiring, Backend Bound/Bad Speculation).
The perf stat tool automatically handles the missing events
and combines the available metrics.
Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: acme@kernel.org Cc: jolsa@kernel.org Link: http://lkml.kernel.org/r/1463703002-19686-5-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Andi Kleen [Fri, 20 May 2016 00:09:57 +0000 (17:09 -0700)]
perf/x86/intel: Add topdown events to Intel Core
Add declarations for the events needed for topdown to the
Intel big core CPUs starting with Sandy Bridge. We need to report
different values depending on whether HyperThreading is on or off.
The only thing this patch does is to export some events
in sysfs.
TopDown level 1 uses a set of abstracted metrics which
are generic to out-of-order CPU cores (although some
CPUs may not implement all of them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
A slot is a single operation in the CPU pipeline.
These metrics then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
The formulas to compute the metrics are generic; they
only change based on the availability of the abstracted
input values.
The kernel declares the events supported by the current
CPU and their scaling factors (such as the pipeline width)
and perf stat then computes the formulas based on the
available metrics. This is similar to how existing
perf metrics, such as TSC metrics or IPC, are implemented.
This abstracts all CPU-pipeline-specific knowledge into the
kernel driver, but still avoids the need for larger scale perf
interface changes.
For HyperThreading the any bit is needed to get accurate
values when both threads are executing. This implies that
the events can only be collected as root or with
perf_event_paranoid=-1 for now.
The basic scheme is based on the following paper:
Yasin, A Top Down Method for Performance analysis and Counter architecture ISPASS14 (pdf available via google)
Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: acme@kernel.org Cc: jolsa@kernel.org Link: http://lkml.kernel.org/r/1463703002-19686-4-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Andi Kleen [Fri, 20 May 2016 00:09:56 +0000 (17:09 -0700)]
perf/x86: Support sysfs files depending on SMT status
Add a way to show different sysfs event attributes depending on
whether HyperThreading is on or off. This is difficult to determine
early at boot, so we just do it dynamically when the sysfs
attribute is read.
Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: acme@kernel.org Cc: jolsa@kernel.org Link: http://lkml.kernel.org/r/1463703002-19686-3-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Andi Kleen [Fri, 20 May 2016 00:09:55 +0000 (17:09 -0700)]
x86/topology: Add topology_max_smt_threads()
For SMT specific workarounds it is useful to know if SMT is active
on any online CPU in the system. This currently requires a loop
over all online CPUs.
Add a global variable that is updated with the maximum number
of SMT threads on any CPU at online/offline time, and use it for
topology_max_smt_threads().
The single call is easier to use than a loop.
Not exported to user space because user space already can use
the existing sibling interfaces to find this out.
Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: acme@kernel.org Cc: jolsa@kernel.org Link: http://lkml.kernel.org/r/1463703002-19686-2-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Vineet Gupta [Mon, 9 May 2016 09:37:39 +0000 (15:07 +0530)]
tools/perf: Handle -EOPNOTSUPP for sampling events
This allows (with a previous change to the perf error return ABI)
calling out in userspace the exact reason for perf record failing
when the PMU doesn't support overflow interrupts.
Note that this needs to be placed ahead of the existing precise_ip
check, as that check would otherwise also be hit for the sampling
failure case.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: <acme@redhat.com> Cc: <linux-snps-arc@lists.infradead.org> Cc: <vincent.weaver@maine.edu> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: Vineet Gupta <Vineet.Gupta1@synopsys.com> Link: http://lkml.kernel.org/r/1462786660-2900-2-git-send-email-vgupta@synopsys.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Vineet Gupta [Mon, 9 May 2016 09:37:40 +0000 (15:07 +0530)]
perf/abi: Change the errno for sampling event not supported in hardware
Change the return code for sampling event not supported from -ENOTSUPP
to -EOPNOTSUPP.
This allows userspace to identify this case specifically, instead of
printing the catch-all error message it did previously.
Technically this is an ABI change, but we think we can get away
with it.
Old behavior:
-------
| # perf record ls
| Error:
| The sys_perf_event_open() syscall returned with 524 (Unknown error 524)
| for event (cycles:ppp).
| /bin/dmesg may provide additional information.
| No CONFIG_PERF_EVENTS=y kernel support configured?
New behavior:
-------
| # perf record ls
| Error:
| PMU Hardware doesn't support sampling/overflow-interrupts.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: <acme@redhat.com> Cc: <linux-snps-arc@lists.infradead.org> Cc: <vincent.weaver@maine.edu> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: Vineet Gupta <Vineet.Gupta1@synopsys.com> Link: http://lkml.kernel.org/r/1462786660-2900-3-git-send-email-vgupta@synopsys.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Kan Liang [Mon, 16 May 2016 06:18:24 +0000 (23:18 -0700)]
perf/x86/intel/uncore: Locate specific box by checking full device info
Some platforms, e.g. Knights Landing, use a common PCI device ID for
multiple instances of an uncore PMU device type. So it is impossible to
locate the specific instances only by PCI device ID.
The current code specially handles Knights Landing by arbitrarily pointing
an instance to an unused uncore box. However, we still have no idea
which uncore device is mapped to which box.
Furthermore, there could be more platforms which use a common PCI device ID
for uncore devices. We have to specially handle them one by one.
This patch records the full device information (slot, func, and device
ID) in id_table[], so the probe function can point the instance to a
specific uncore box by checking the full device information.
Tested-by: Lukasz Odzioba <lukasz.odzioba@intel.com> Signed-off-by: Kan Liang <kan.liang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: tglx@linutronix.de Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: bp@suse.de Cc: harish.chegondi@intel.com Cc: hubert.chrzaniuk@intel.com Cc: lawrence.f.meadows@intel.com Link: http://lkml.kernel.org/r/1463379504-39003-1-git-send-email-kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Lukasz Odzioba [Mon, 16 May 2016 21:16:18 +0000 (23:16 +0200)]
perf/x86/intel: Add 'static' keyword to locally used arrays
Add the 'static' keyword to intel_bdw_event_constraints[], snb_events_attrs[],
nhm_events_attrs[] and intel_skl_event_constraints[] arrays, because
they are only used locally.
Signed-off-by: Lukasz Odzioba <lukasz.odzioba@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: akpm@linux-foundation.org Cc: hpa@zytor.com Cc: kan.liang@intel.com Cc: lukasz.anaczkowski@intel.com Cc: zheng.z.yan@intel.com Link: http://lkml.kernel.org/r/1463433378-16816-1-git-send-email-lukasz.odzioba@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
... which commit unconditionally sets the perf_sample_allowed_ns value
to !0. But that could trigger a bug in the following corner case:
The user can disable the dynamic interrupt throttle mechanism by setting
perf_cpu_time_max_percent to 0. Then they change perf_event_max_sample_rate.
For this case, the mechanism will be enabled implicitly, because
perf_sample_allowed_ns becomes !0 - which is not what we want.
This patch only updates perf_sample_allowed_ns when the dynamic
interrupt throttle mechanism is enabled.
Signed-off-by: Kan Liang <kan.liang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: acme@kernel.org Link: http://lkml.kernel.org/r/1462260366-3160-1-git-send-email-kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra [Thu, 12 May 2016 15:26:46 +0000 (17:26 +0200)]
perf/core: Rename the perf_event_aux*() APIs to perf_event_sb*(), to separate them from AUX ring-buffer records
There are now two different things called AUX in perf: the
infrastructure that delivers the mmap/comm/task records and the
AUX part of the mmap buffer (with the associated AUX_RECORD).
Since the former is internal, rename it to side-band to reduce
the confusion factor.
No change in functionality.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
Kan Liang [Wed, 23 Mar 2016 18:24:37 +0000 (11:24 -0700)]
perf/core: Optimize side-band event delivery
The perf_event_aux() function iterates all PMUs and all events in
their respective per-CPU contexts to find the events to deliver
side-band records to.
For example, the brk test case in lkp triggers many mmap() operations,
which, if we're also running perf, results in many perf_event_aux()
invocations.
If we enable uncore PMU support (even when uncore events are not used),
dozens of uncore PMUs will be iterated, which can significantly
decrease brk_test's throughput.
For example, the brk throughput:
without uncore PMUs: 2647573 ops_per_sec
with uncore PMUs: 1768444 ops_per_sec
... a 33% reduction.
To get at the per-CPU events that need side-band records, this patch
puts these events on a per-CPU list; this avoids iterating the PMUs
and any events that do not need side-band records.
Per-task events are unchanged to avoid extra overhead on the context
switch paths.
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reported-by: Huang, Ying <ying.huang@linux.intel.com> Signed-off-by: Kan Liang <kan.liang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: http://lkml.kernel.org/r/1458757477-3781-1-git-send-email-kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Now this patch introduces a way to configure this per event, i.e. this
becomes possible:
$ perf record -e sched:*/max-stack=2/ -e block:*/max-stack=10/ -a
allowing finer tuning of how much buffer space callchains use.
This uses a u16 from the reserved space at the end, leaving another
u16 for future use.
There has been interest in even finer tuning, namely to control the
max stack for kernel and userspace callchains separately. Further
discussion is needed; we may, for instance, use the remaining u16 for
that and, when it is present, assume that the sample_max_stack
introduced in this patch applies to the kernel, with the remaining u16
used for limiting the userspace callchain. (Arnaldo Carvalho de Melo)
Infrastructure changes:
- Adopt get_main_thread from db-export.c (Andi Kleen)
- More prep work for backward ring buffer support (Wang Nan)
- Prep work for supporting SDT (Statically Defined Tracing)
tracepoints (Masami Hiramatsu)
- Add arch/*/include/generated/ to .gitignore (Taeung Song)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Masami Hiramatsu [Sat, 28 May 2016 15:15:37 +0000 (00:15 +0900)]
perf buildid-cache: Use path/to/bin/buildid/elf instead of path/to/bin/buildid
Use path/to/bin/buildid/elf instead of path/to/bin/buildid
to store the corresponding ELF binary.
This also stores vdso in buildid/vdso and kallsyms in buildid/kallsyms.
Note that existing caches are not updated until the user adds or
updates the cache. If an old-style build-id cache is present, perf
falls back to using it (IOW, it is backward compatible).
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: Hemant Kumar <hemant@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20160528151537.16098.85815.stgit@devbox Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Masami Hiramatsu [Sat, 28 May 2016 15:15:13 +0000 (00:15 +0900)]
perf symbols: Introduce filename__readable to check readability
Introduce filename__readable() to check readability by opening the file
directly. access(R_OK) only checks readability based on the real
UID/GID, ignoring the effective UID/GID and capabilities, which matter
for some special files (e.g. /proc/kcore).
filename__readable() opens the given file with O_RDONLY so that the
kernel checks it against the effective UID/GID and capabilities.
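A minimal sketch of the idea (not the exact tools/perf code; the small
main() is only for illustration):
#include <fcntl.h>
#include <unistd.h>
#include <stdbool.h>

static bool filename__readable(const char *file)
{
	/* the kernel checks effective UID/GID and capabilities here */
	int fd = open(file, O_RDONLY);

	if (fd < 0)
		return false;
	close(fd);
	return true;
}

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/proc/kcore";

	return filename__readable(path) ? 0 : 1;
}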
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: Hemant Kumar <hemant@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20160528151513.16098.97576.stgit@devbox Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Wed, 25 May 2016 13:44:57 +0000 (13:44 +0000)]
tools: Pass arg to fdarray__filter's call back function
Before this patch there's no way to pass arguments to fdarray__filter's
callback function.
This improvement will be used by 'perf record' to support unmapping the
ring buffers of both the main evlist and the overwrite evlist. Without
this patch there's no way to track the overwrite evlist from 'struct
fdarray'.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1464183898-174512-10-git-send-email-wangnan0@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Wed, 25 May 2016 13:44:50 +0000 (13:44 +0000)]
perf evlist: Choose correct reading direction according to evlist->backward
Now we have evlist->backward to indicate the mmap direction. Make
perf_evlist__mmap_read() choose the right direction automatically.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1464183898-174512-3-git-send-email-wangnan0@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Wed, 25 May 2016 13:44:49 +0000 (13:44 +0000)]
perf evlist: Check 'base' pointer before checking refcnt when put a mmap
evlist->mmap[i].refcnt could be 0 if an evlist has no evsels or if all
the evsels don't match the evlist during mmap, for example, when all
the evsels are overwritable but the evlist itself is normal. To avoid
crashing, perf should check the 'base' pointer before checking refcnt,
and raise a BUG only when base is not NULL.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1464183898-174512-2-git-send-email-wangnan0@huawei.com
[ Renamed 'mmap' variable, it is reserved in old distros such as Ubuntu 12.04, breaking the build ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Tue, 24 May 2016 02:28:59 +0000 (02:28 +0000)]
perf evlist: Don't poll and mmap overwritable events
There's no need to receive events from an overwritable ring buffer.
Instead, perf should let them run in the background until some external
event of interest takes place. This patch makes perf ignore normal
events from overwrite evlists.
Overwritable events must be mapped read-only and backward, so if the
evlist and evsel don't match (evsel->overwrite is true but the evlist
is either read/write or not backward, and vice versa), skip mapping it.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1464056944-166978-3-git-send-email-wangnan0@huawei.com Signed-off-by: He Kuang <hekuang@huawei.com>
[ Split from a larger patch ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
It is possible that all events in an evlist are overwritable.
perf_event__synth_time_conv() should not crash in this case.
record__pick_pc() is used to check availability.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1464056944-166978-3-git-send-email-wangnan0@huawei.com Signed-off-by: He Kuang <hekuang@huawei.com>
[ Split from a larger patch ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
In addition to being able to control the system-wide maximum depth via
/proc/sys/kernel/perf_event_max_stack, now we are able to ask for
different depths per event, using perf_event_attr.sample_max_stack for
that.
This uses a u16 hole at the end of perf_event_attr. When
perf_event_attr.sample_type has PERF_SAMPLE_CALLCHAIN set, a
sample_max_stack of zero means use perf_event_max_stack; otherwise the
value is bounds-checked under callchain_mutex.
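A hedged sketch of how a tool could use the new field, assuming kernel
headers that already carry sample_max_stack (the event choice and the
depth of 16 are illustrative, not perf's own code):
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>

static int open_cycles_with_callchain(unsigned short max_stack)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;
	attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_CALLCHAIN;
	attr.sample_max_stack = max_stack; /* 0 means use perf_event_max_stack */

	/* pid = 0 (this task), cpu = -1 (any), group_fd = -1, flags = 0 */
	return syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void)
{
	return open_cycles_with_callchain(16) < 0 ? 1 : 0;
}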
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: Wang Nan <wangnan0@huawei.com> Cc: Zefan Li <lizefan@huawei.com> Link: http://lkml.kernel.org/n/tip-kolmn1yo40p7jhswxwrc7rrd@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Tue, 24 May 2016 09:21:28 +0000 (09:21 +0000)]
perf record: Fix crash when kptr is restricted
Before this patch, a simple 'perf record' could fail if kptr_restrict is
set to 1 (for normal user) or 2 (for root):
# perf record ls
WARNING: Kernel address maps (/proc/{kallsyms,modules}) are restricted,
check /proc/sys/kernel/kptr_restrict.
Samples in kernel functions may not be resolved if a suitable vmlinux
file is not found in the buildid cache or in the vmlinux path.
Samples in kernel modules won't be resolved at all.
If some relocation was applied (e.g. kexec) symbols may be misresolved
even with a suitable vmlinux or kallsyms file.
Segmentation fault (core dumped)
This patch skips perf_event__synthesize_kernel_mmap() when kptr is not
available.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Fixes: 45e90056904b ("perf machine: Do not bail out if not managing to read ref reloc symbol") Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1464081688-167940-2-git-send-email-wangnan0@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Tue, 24 May 2016 09:21:27 +0000 (09:21 +0000)]
perf symbols: Check kptr_restrict for root
If kptr_restrict is set to 2, even root is not allowed to see pointers.
This patch checks kptr_restrict even if euid == 0. For root, report
error if kptr_restrict is 2.
On the RAPL cleanup path, kfree() is mistakenly given the address of
the pointer to the structure to free (rapl_pmus->pmus + i). Pass the
pointer itself instead (rapl_pmus->pmus[i]).
Ingo Molnar [Tue, 24 May 2016 05:40:52 +0000 (07:40 +0200)]
Merge tag 'perf-core-for-mingo-20160523' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent
Pull perf/core improvements from Arnaldo Carvalho de Melo:
User visible changes:
- Add "srcline_from" and "srcline_to" branch sort keys to 'perf top' and
'perf report' (Andi Kleen)
Infrastructure changes:
- Make 'perf trace' auto-attach the fd->name and ptr->name beautifiers based
on the names of syscall arguments; this way new syscalls that have
'const char *' args named path, pathname or filename will use the
ptr->name beautifier (vfs_getname perf probe, if in place) and 'fd' args
will use the fd->name beautifier (vfs_getname or via /proc/PID/fd/)
(Arnaldo Carvalho de Melo)
- Infrastructure to read from a ring buffer in backward write mode (Wang Nan)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Wang Nan [Mon, 23 May 2016 07:13:41 +0000 (07:13 +0000)]
perf record: Read from backward ring buffer
Introduce rb_find_range() to find the start and end positions in a
backward ring buffer.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1463987628-163563-5-git-send-email-wangnan0@huawei.com Signed-off-by: He Kuang <hekuang@huawei.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Mon, 23 May 2016 07:13:40 +0000 (07:13 +0000)]
perf record: Rename variable to make code clear
record__mmap_read() writes data from the ring buffer into perf.data.
'head' is maintained by the kernel and points to the last written
record. 'old' is maintained by perf and points to the record read in
the previous round. record__mmap_read() saves the data from 'old' to
'head' into perf.data.
The names of these variables are not very intuitive. In addition, when
dealing with a backward-writing ring buffer, the md->prev pointer
should point to 'head' instead of the last byte it got.
Add 'start' and 'end' pointers to make the code clearer, and set
md->prev to 'head' instead of the moved 'old' pointer. This patch
doesn't change behavior since:
buf = &data[old & md->mask];
size = head - old;
old += size; <--- Here, old == head
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1463987628-163563-4-git-send-email-wangnan0@huawei.com Signed-off-by: He Kuang <hekuang@huawei.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Mon, 23 May 2016 07:13:39 +0000 (07:13 +0000)]
perf record: Prevent reading invalid data in record__mmap_read
When record__mmap_read() requires more data than the size of the ring
buffer, drop that data to avoid accessing invalid memory.
This can happen when reading from an overwritable ring buffer, which
should be avoided. However, check for it for robustness.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1463987628-163563-3-git-send-email-wangnan0@huawei.com Signed-off-by: He Kuang <hekuang@huawei.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Mon, 23 May 2016 07:13:38 +0000 (07:13 +0000)]
perf evlist: Add API to pause/resume
perf_evlist__toggle_{pause,resume}() are introduced to pause/resume
events in an evlist, using the PERF_EVENT_IOC_PAUSE_OUTPUT ioctl.
Following commits use them to ensure the overwrite ring buffer is
paused before reading.
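A minimal sketch of what the toggle boils down to per event file
descriptor (fd is assumed to be an already opened perf event fd; this
is not the evlist code itself):
#include <linux/perf_event.h>
#include <sys/ioctl.h>

/* pause (1) or resume (0) output into the event's ring buffer */
int perf_event_pause_output(int fd, int paused)
{
	return ioctl(fd, PERF_EVENT_IOC_PAUSE_OUTPUT, paused ? 1 : 0);
}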
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1463987628-163563-2-git-send-email-wangnan0@huawei.com Signed-off-by: He Kuang <hekuang@huawei.com>
[ Return -1, like all other ioctl() usage in evlist.c, rename 'pause'
arg to avoid breaking the build on ubuntu 12.04 and other old systems ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf trace: Use the ptr->name beautifier as default for "filename" args
Auto-attach the ptr->name beautifier to syscall args "filename", "path"
and "pathname" if they are of type "const char *".
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/n/tip-jxii4qmcgoppftv0zdvml9d7@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf trace: Use the fd->name beautifier as default for "fd" args
Noticed when the 'setsockopt' 'fd' arg wasn't being formatted via
the SCA_FD beautifier, so just remove the explicit setting of "fd" args
to SCA_FD and do it when reading the syscall info, like we do for
args of type "pid_t", i.e. having "fd" as the name should be enough
for the decision to use the SCA_FD beautifier. For odd cases we can
just set it explicitly.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/n/tip-0qissgetiuqmqyj4b6ancmpn@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Fri, 20 May 2016 20:15:08 +0000 (13:15 -0700)]
perf report: Add srcline_from/to branch sort keys
Add "srcline_from" and "srcline_to" branch sort keys that allow to show
the source lines of a branch.
That makes it much easier to track down where particular branches happen
in the program, for example to examine branch mispredictions, or to
associate them with cycle counts:
% perf record -b -e cycles:p ./tcall
% perf report --sort srcline_from,srcline_to,mispredict
...
15.10% tcall.c:18 tcall.c:10 N
14.83% tcall.c:11 tcall.c:5 N
14.12% tcall.c:7 tcall.c:12 N
14.04% tcall.c:12 tcall.c:5 N
12.42% tcall.c:17 tcall.c:18 N
12.39% tcall.c:7 tcall.c:13 N
12.27% tcall.c:13 tcall.c:17 N
...
Wang Nan [Fri, 20 May 2016 16:38:24 +0000 (16:38 +0000)]
perf evsel: Record fd into perf_mmap
Add a fd field into struct perf_mmap so that perf can track the mmap fd.
This feature will be used for toggling overwrite ring buffers.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1463762315-155689-3-git-send-email-wangnan0@huawei.com Signed-off-by: He Kuang <hekuang@huawei.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Fri, 20 May 2016 16:38:23 +0000 (16:38 +0000)]
perf evsel: Add overwrite attribute and check write_backward
Add 'overwrite' attribute to evsel to mark whether this event is
overwritable. The following commits will support syntax like:
# perf record -e cycles/overwrite/ ...
An overwritable evsel requires kernel support for the
perf_event_attr.write_backward ring buffer feature.
Add it to perf_missing_feature.
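A hedged sketch of what requesting the kernel feature looks like,
assuming kernel headers that already have the write_backward bit (the
event choice and period are illustrative); an older kernel rejects such
an attr, which is what the perf_missing_feature fallback detects:
#include <linux/perf_event.h>
#include <string.h>

/* fill an attr for an overwritable, backward-written ring buffer */
void attr__set_overwrite(struct perf_event_attr *attr)
{
	memset(attr, 0, sizeof(*attr));
	attr->size = sizeof(*attr);
	attr->type = PERF_TYPE_HARDWARE;
	attr->config = PERF_COUNT_HW_CPU_CYCLES;
	attr->sample_period = 100000;
	attr->write_backward = 1; /* needs kernel write_backward support */
}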
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1463762315-155689-2-git-send-email-wangnan0@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Ingo Molnar [Fri, 20 May 2016 17:37:43 +0000 (19:37 +0200)]
Merge tag 'perf-core-for-mingo-20160520' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
User visible changes:
- We should not use the current value of the kernel.perf_event_max_stack
sysctl as the default value for --max-stack in tools that can process
perf.data files; they will only match if that sysctl wasn't changed from
its default value at the time the perf.data file was recorded. Fix it.
This fixes a bug where 'perf record -a --call-graph dwarf ; perf report'
produces a glibc invalid free backtrace (Arnaldo Carvalho de Melo)
- Provide a better warning when running 'perf trace' on a system where the
kernel.kptr_restrict is set to 1, similar to the one produced by 'perf record',
noticed on ubuntu 16.04 where this is the default kptr_restrict setting.
(Arnaldo Carvalho de Melo)
- Fix ordering of instructions in the annotation code, noticed when annotating
ARM binaries, now that table is auto-ordered at first use, to avoid more such
problems (Chris Ryder)
- Set buildid dir under symfs when --symfs is provided (He Kuang)
- Fix the 'exit_group()' syscall output in 'perf trace' (Arnaldo Carvalho de Melo)
- Only auto set call-graph to "dwarf" in 'perf trace' when syscalls are being
traced (Arnaldo Carvalho de Melo)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
He Kuang [Thu, 19 May 2016 11:47:37 +0000 (11:47 +0000)]
perf tools: Set buildid dir under symfs when --symfs is provided
This patch moves the reference to the buildid dir to 'symfs/.debug' and
skips the local buildid dir when '--symfs' is given, so that every
single file opened by perf is now relative to the symfs directory.
Signed-off-by: He Kuang <hekuang@huawei.com> Acked-by: David Ahern <dsahern@gmail.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kan Liang <kan.liang@intel.com> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1463658462-85131-2-git-send-email-hekuang@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf trace: Only auto set call-graph to "dwarf" when syscalls are being traced
When --min-stack or --max-stack is passed but --no-syscalls is also in
effect, there is no point in automatically setting '--call-graph dwarf'.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/n/tip-pq922i7h9wef0pho1dqpttvn@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Chris Ryder [Thu, 19 May 2016 16:59:46 +0000 (17:59 +0100)]
perf annotate: Sort list of recognised instructions
Currently the list of instructions recognised by perf annotate has to be
explicitly written in sorted order. This makes it easy to make mistakes
when adding new instructions. Sort the list of instructions on first
access.
Signed-off-by: Chris Ryder <chris.ryder@arm.com> Acked-by: Pawel Moll <pawel.moll@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/4268febaf32f47f322c166fb2fe98cfec7041e11.1463676839.git.chris.ryder@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Chris Ryder [Thu, 19 May 2016 16:59:45 +0000 (17:59 +0100)]
perf annotate: Fix identification of ARM blt and bls instructions
The ARM blt and bls instructions are not correctly identified when
parsing assembly because the lookup does a binary search, which requires
the list of recognised instructions to be sorted by name ('bls' sorts
before 'blt'). Swap the ordering of blt and bls.
Signed-off-by: Chris Ryder <chris.ryder@arm.com> Acked-by: Pawel Moll <pawel.moll@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/560e196b7c79b7ff853caae13d8719a31479cb1a.1463676839.git.chris.ryder@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf callchain: Stop validating callchains by the max_stack sysctl
As thread__resolve_callchain_sample() can be used for handling perf.data
files, these could have been recorded with a larger max_stack sysctl
setting than the one set on the system used for the analysis.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: Wang Nan <wangnan0@huawei.com> Cc: Zefan Li <lizefan@huawei.com> Link: http://lkml.kernel.org/n/tip-2995bt2g5yq2m05vga4kip6m@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/n/tip-y18oeou494uy11im7u9to0dx@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf trace: Warn when trying to resolve kernel addresses with kptr_restrict=1
Hook into the libtraceevent plugin kernel symbol resolver to warn the
user that kernel addresses can't be resolved when kptr_restrict=1.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/n/tip-9gc412xx1gl0lvqj1d1xwlyb@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf machine: Do not bail out if not managing to read ref reloc symbol
This means the user can't access /proc/kallsyms, for instance, because
/proc/sys/kernel/kptr_restrict is set to 1.
Instead leave the ref_reloc_sym as NULL and code using it will cope.
This allows 'perf trace' to work on such systems for !root; the only
issue would be when trying to resolve kernel symbols, which happens,
for instance, in some libtraceevent plugins. A warning for that case
will be provided in the next patch in this series.
Noticed on Ubuntu 16.04, which comes with kptr_restrict=1.
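An illustrative unprivileged session on such a system (symbol names and the
exact output will vary):

  $ cat /proc/sys/kernel/kptr_restrict
  1
  $ head -2 /proc/kallsyms
  0000000000000000 A irq_stack_union
  0000000000000000 A __per_cpu_start

perf still runs; it just cannot resolve kernel symbols from addresses that
read back as zero.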
Reported-by: Milian Wolff <milian.wolff@kdab.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/n/tip-knpu3z4iyp2dxpdfm798fac4@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Ingo Molnar [Fri, 20 May 2016 06:19:20 +0000 (08:19 +0200)]
Merge tag 'perf-core-for-mingo-20160516' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
User visible changes:
- Honour the kernel.perf_event_max_stack knob more precisely by not counting
PERF_CONTEXT_{KERNEL,USER} when deciding when to stop adding entries to
the perf_sample->ip_callchain[] array (Arnaldo Carvalho de Melo)
- Fix indentation of 'stalled-backend-cycles' in 'perf stat' (Namhyung Kim)
- Update runtime using 'cpu-clock' event in 'perf stat' (Namhyung Kim)
- Use 'cpu-clock' for cpu targets in 'perf stat' (Namhyung Kim)
- Avoid fractional digits for integer scales in 'perf stat' (Andi Kleen)
- Store the vdso buildid unconditionally, as it appears in callchains and
we're not checking those when creating the build-id table, so we end up
unable to resolve VDSO symbols when doing analysis on a different machine
than the one where recording was done, possibly even one of a different
arch (arm -> x86_64) (He Kuang)
Infrastructure changes:
- Generalize max_stack sysctl handler, will be used for configuring
multiple kernel knobs related to callchains (Arnaldo Carvalho de Melo)
Cleanups:
- Introduce DSO__NAME_KALLSYMS and DSO__NAME_KCORE, to stop using
open coded strings (Masami Hiramatsu)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Jiri Olsa [Wed, 18 May 2016 06:16:10 +0000 (08:16 +0200)]
perf/x86/intel/uncore: Remove WARN_ON_ONCE in uncore_pci_probe
When booting with nr_cpus=1, uncore_pci_probe tries to init the PCI/uncore
also for the other packages and fails with a warning when they are not found.
The warning is bogus, because it is correct to fail here for packages that
were not initialized. Remove it and return silently.
Fixes: cf6d445f6897 "perf/x86/uncore: Track packages, not per CPU data" Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: stable@vger.kernel.org Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
perf tools: Separate accounting of contexts and real addresses in a stack trace
The perf_sample->ip_callchain->nr value includes all the entries in the
ip_callchain->ip[] array, real addresses and PERF_CONTEXT_{KERNEL,USER,etc},
while what the user expects is that the value in the kernel.perf_event_max_stack
sysctl, or in the upcoming per-event perf_event_attr.sample_max_stack knob, be
honoured in terms of IP addresses in the stack trace.
So match the kernel support and validate chain->nr taking into account
both kernel.perf_event_max_stack and kernel.perf_event_max_contexts_per_stack.
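A simplified sketch of the new validation (not the exact perf code), assuming
the PERF_CONTEXT_* definitions from the uapi perf_event.h header, where the
context markers are huge u64 values at or above PERF_CONTEXT_MAX:

    #include <stdbool.h>
    #include <linux/perf_event.h>   /* PERF_CONTEXT_MAX, __u64 */

    /* mirrors the perf.data callchain layout: nr entries follow */
    struct ip_callchain {
            __u64 nr;
            __u64 ips[];
    };

    static bool chain_nr_is_valid(const struct ip_callchain *chain,
                                  __u64 max_stack, __u64 max_contexts)
    {
            __u64 i, nr_addrs = 0, nr_contexts = 0;

            for (i = 0; i < chain->nr; i++) {
                    if (chain->ips[i] >= PERF_CONTEXT_MAX)
                            nr_contexts++;  /* PERF_CONTEXT_{KERNEL,USER,...} */
                    else
                            nr_addrs++;     /* real instruction pointer */
            }
            /* real addresses and context markers are limited separately */
            return nr_addrs <= max_stack && nr_contexts <= max_contexts;
    }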
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: Wang Nan <wangnan0@huawei.com> Cc: Zefan Li <lizefan@huawei.com> Link: http://lkml.kernel.org/n/tip-mgx0jpzfdq4uq4abfa40byu0@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf core: Separate accounting of contexts and real addresses in a stack trace
The perf_sample->ip_callchain->nr value includes all the entries in the
ip_callchain->ip[] array, real addresses and PERF_CONTEXT_{KERNEL,USER,etc},
while what the user expects is that the value in the kernel.perf_event_max_stack
sysctl, or in the upcoming per-event perf_event_attr.sample_max_stack knob, be
honoured in terms of IP addresses in the stack trace.
So allocate a bunch of extra entries for contexts, and do the accounting
via perf_callchain_entry_ctx struct members.
A new sysctl, kernel.perf_event_max_contexts_per_stack is also
introduced for investigating possible bugs in the callchain
implementation by some arch.
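For instance, the new knob can be inspected and tuned via sysctl (the default
shown here is illustrative):

  # sysctl kernel.perf_event_max_contexts_per_stack
  kernel.perf_event_max_contexts_per_stack = 8
  # sysctl -w kernel.perf_event_max_contexts_per_stack=16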
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: Wang Nan <wangnan0@huawei.com> Cc: Zefan Li <lizefan@huawei.com> Link: http://lkml.kernel.org/n/tip-3b4wnqk340c4sg4gwkfdi9yk@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
We need to have different helpers to account for how many contexts we have
in the sample and how many real addresses, so do it now as a prep patch,
to ease review.
Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-q964tnyuqrxw5gld18vizs3c@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf core: Add a 'nr' field to perf_event_callchain_context
We will use it to count how many addresses are in the entry->ip[] array,
excluding PERF_CONTEXT_{KERNEL,USER,etc} entries, so that we can really
return the number of entries specified by the user via the relevant
sysctl, kernel.perf_event_max_stack, or via the per event
perf_event_attr.sample_max_stack knob.
This way we keep the perf_sample->ip_callchain->nr meaning, that is the
number of entries, be it real addresses or PERF_CONTEXT_ entries, while
honouring the max_stack knobs, i.e. the end result will be max_stack
entries if we have at least that many entries in a given stack trace.
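A sketch of the accounting idea only; the struct and helper names below are
hypothetical, not the kernel's exact definitions:

    #include <stdint.h>

    struct callchain_entry {
            uint64_t nr;            /* total: real IPs + PERF_CONTEXT_* markers */
            uint64_t ip[127];
    };

    struct callchain_ctx {
            struct callchain_entry *entry;
            uint32_t max_stack;     /* limit on real addresses only */
            uint32_t nr;            /* real addresses stored so far */
    };

    static void ctx_store_ip(struct callchain_ctx *ctx, uint64_t ip)
    {
            /* entry->nr keeps its old meaning; the max_stack check uses
             * ctx->nr, which excludes context markers */
            if (ctx->nr < ctx->max_stack && ctx->entry->nr < 127) {
                    ctx->entry->ip[ctx->entry->nr++] = ip;
                    ctx->nr++;
            }
    }

    static void ctx_store_context(struct callchain_ctx *ctx, uint64_t mark)
    {
            /* context markers are stored but not counted against max_stack */
            if (ctx->entry->nr < 127)
                    ctx->entry->ip[ctx->entry->nr++] = mark;
    }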
Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-s8teto51tdqvlfhefndtat9r@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf core: Pass max stack as a perf_callchain_entry context
This makes perf_callchain_{user,kernel}() receive the max stack
as context for the perf_callchain_entry, instead of accessing
the global sysctl_perf_event_max_stack.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: Wang Nan <wangnan0@huawei.com> Cc: Zefan Li <lizefan@huawei.com> Link: http://lkml.kernel.org/n/tip-kolmn1yo40p7jhswxwrc7rrd@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Namhyung Kim [Fri, 13 May 2016 06:01:03 +0000 (15:01 +0900)]
perf stat: Use cpu-clock event for cpu targets
Currently 'perf stat' always counts the task-clock event by default. But
it's somewhat confusing for system-wide targets (especially with 'sleep
N', as the 'sleep' task just sleeps and doesn't use cputime). Changing
to the cpu-clock event for that case makes more sense IMHO.
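For instance, with this change a system-wide run like:

  # perf stat -a sleep 1

counts cpu-clock rather than task-clock as its default software clock event.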
Namhyung Kim [Fri, 13 May 2016 06:01:02 +0000 (15:01 +0900)]
perf stat: Update runtime using cpu-clock event
Currently only the task-clock event updates runtime_nsec, so the metric
cannot be shown when using cpu-clock events. However, cpu-clock works
basically the same as task-clock, so there is no reason not to update the
runtime IMHO.
Before:
# perf stat -a -e cpu-clock,context-switches,page-faults,cycles sleep 0.1
Namhyung Kim [Fri, 13 May 2016 06:01:01 +0000 (15:01 +0900)]
perf stat: Fix indentation of stalled backend cycle
The commit 140aeadc1fb5 ("perf stat: Abstract stat metrics printing")
changed how shadow metrics are printed, but missed updating the width of
the stalled backend cycles event to 7.2% like the others. This resulted
in misaligned output like below:
He Kuang [Thu, 12 May 2016 08:43:11 +0000 (08:43 +0000)]
perf symbols: Store vdso buildid unconditionally
When unwinding callchains on a different machine, vdso info should be
available so the unwind process won't be interrupted if an address falls
into the vdso region. But in most cases the addresses of sample events are
not in the vdso range, so the buildid of a zero-hit vdso won't be stored
into perf.data.
This patch stores vdso buildid regardless of whether the vdso is hit or
not.
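The cross-machine scenario this helps, sketched with illustrative commands:

  # perf record -g -- ./app      # on the target, e.g. 32-bit ARM
  ... copy perf.data to an x86_64 analysis host ...
  # perf report -i perf.data     # unwinding can now find the vdso via its build-id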
Signed-off-by: He Kuang <hekuang@huawei.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kan Liang <kan.liang@intel.com> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1463042596-61703-3-git-send-email-hekuang@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Thu, 5 May 2016 23:04:03 +0000 (16:04 -0700)]
perf stat: Avoid fractional digits for integer scales
When the scaling factor is a full integer, don't display fractional
digits. This avoids unnecessary .00 output for topdown metrics with
scale factors.
v2: Remove redundant check.
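A sketch of the idea only (not perf's exact code): pick the output format
based on whether the scale is integral.

    #include <math.h>
    #include <stdio.h>

    static void print_scaled_counter(double val, double scale)
    {
            if (floor(scale) == scale)
                    printf("%18.0f", val);  /* e.g. topdown metrics: no trailing .00 */
            else
                    printf("%18.2f", val);
    }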
Signed-off-by: Andi Kleen <ak@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1462489447-31832-7-git-send-email-andi@firstfloor.org
[ Rename 'round' to 'stat_round' as 'round' is defined in math.h,
included by this patch, and this breaks the build on ubuntu 12.04 ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Linus Torvalds [Mon, 16 May 2016 23:46:03 +0000 (16:46 -0700)]
Merge branch 'x86-platform-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 platform updates from Ingo Molnar:
"The main change is the addition of SGI/UV4 support"
* 'x86-platform-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (22 commits)
x86/platform/UV: Fix incorrect nodes and pnodes for cpuless and memoryless nodes
x86/platform/UV: Remove Obsolete GRU MMR address translation
x86/platform/UV: Update physical address conversions for UV4
x86/platform/UV: Build GAM reference tables
x86/platform/UV: Support UV4 socket address changes
x86/platform/UV: Add obtaining GAM Range Table from UV BIOS
x86/platform/UV: Add UV4 addressing discovery function
x86/platform/UV: Fold blade info into per node hub info structs
x86/platform/UV: Allocate common per node hub info structs on local node
x86/platform/UV: Move blade local processor ID to the per cpu info struct
x86/platform/UV: Move scir info to the per cpu info struct
x86/platform/UV: Create per cpu info structs to replace per hub info structs
x86/platform/UV: Update MMIOH setup function to work for both UV3 and UV4
x86/platform/UV: Clean up redundancies after merge of UV4 MMR definitions
x86/platform/UV: Add UV4 Specific MMR definitions
x86/platform/UV: Prep for UV4 MMR updates
x86/platform/UV: Add UV MMR Illegal Access Function
x86/platform/UV: Add UV4 Specific Defines
x86/platform/UV: Add UV Architecture Defines
x86/platform/UV: Add Initial UV4 definitions
...
* 'x86-boot-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (50 commits)
x86/KASLR: Clarify purpose of each get_random_long()
x86/KASLR: Add virtual address choosing function
x86/KASLR: Return earliest overlap when avoiding regions
x86/KASLR: Add 'struct slot_area' to manage random_addr slots
x86/boot: Add missing file header comments
x86/KASLR: Initialize mapping_info every time
x86/boot: Comment what finalize_identity_maps() does
x86/KASLR: Build identity mappings on demand
x86/boot: Split out kernel_ident_mapping_init()
x86/boot: Clean up indenting for asm/boot.h
x86/KASLR: Improve comments around the mem_avoid[] logic
x86/boot: Simplify pointer casting in choose_random_location()
x86/KASLR: Consolidate mem_avoid[] entries
x86/boot: Clean up pointer casting
x86/boot: Warn on future overlapping memcpy() use
x86/boot: Extract error reporting functions
x86/boot: Correctly bounds-check relocations
x86/KASLR: Clean up unused code from old 'run_size' and rename it to 'kernel_total_size'
x86/boot: Fix "run_size" calculation
x86/boot: Calculate decompression size during boot not build
...
- enhance PAT handling in enumerated CPUs (Toshi Kani)
... and lots of other cleanups/fixlets"
* 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (70 commits)
x86/arch_prctl/64: Restore accidentally removed put_cpu() in ARCH_SET_GS
x86/entry/32: Remove asmlinkage_protect()
x86/entry/32: Remove GET_THREAD_INFO() from entry code
x86/entry, sched/x86: Don't save/restore EFLAGS on task switch
x86/asm/entry/32: Simplify pushes of zeroed pt_regs->REGs
selftests/x86/ldt_gdt: Test set_thread_area() deletion of an active segment
x86/tls: Synchronize segment registers in set_thread_area()
x86/asm/64: Rename thread_struct's fs and gs to fsbase and gsbase
x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization
x86/segments/64: When load_gs_index fails, clear the base
x86/segments/64: When loadsegment(fs, ...) fails, clear the base
x86/asm: Make asm/alternative.h safe from assembly
x86/asm: Stop depending on ptrace.h in alternative.h
x86/entry: Rename is_{ia32,x32}_task() to in_{ia32,x32}_syscall()
x86/asm: Make sure verify_cpu() has a good stack
x86/extable: Add a comment about early exception handlers
x86/msr: Set the return value to zero when native_rdmsr_safe() fails
x86/paravirt: Make "unsafe" MSR accesses unsafe even if PARAVIRT=y
x86/paravirt: Add paravirt_{read,write}_msr()
x86/msr: Carry on after a non-"safe" MSR access fails
...
Linus Torvalds [Mon, 16 May 2016 21:47:16 +0000 (14:47 -0700)]
Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:
- massive CPU hotplug rework (Thomas Gleixner)
- improve migration fairness (Peter Zijlstra)
- CPU load calculation updates/cleanups (Yuyang Du)
- cpufreq updates (Steve Muckle)
- nohz optimizations (Frederic Weisbecker)
- switch_mm() micro-optimization on x86 (Andy Lutomirski)
- ... lots of other enhancements, fixes and cleanups.
* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (66 commits)
ARM: Hide finish_arch_post_lock_switch() from modules
sched/core: Provide a tsk_nr_cpus_allowed() helper
sched/core: Use tsk_cpus_allowed() instead of accessing ->cpus_allowed
sched/loadavg: Fix loadavg artifacts on fully idle and on fully loaded systems
sched/fair: Correct unit of load_above_capacity
sched/fair: Clean up scale confusion
sched/nohz: Fix affine unpinned timers mess
sched/fair: Fix fairness issue on migration
sched/core: Kill sched_class::task_waking to clean up the migration logic
sched/fair: Prepare to fix fairness problems on migration
sched/fair: Move record_wakee()
sched/core: Fix comment typo in wake_q_add()
sched/core: Remove unused variable
sched: Make hrtick_notifier an explicit call
sched/fair: Make ilb_notifier an explicit call
sched/hotplug: Make activate() the last hotplug step
sched/hotplug: Move migration CPU_DYING to sched_cpu_dying()
sched/migration: Move CPU_ONLINE into scheduler state
sched/migration: Move calc_load_migrate() into CPU_DYING
sched/migration: Move prepare transition to SCHED_STARTING state
...