So far, the inlined nodes were only reversed when we built perf
against libbfd. If that was not available, the addr2line fallback
code path was missing the inline_list__reverse call.
Now we always add the nodes in the correct order within
inline_list__append. This removes the need to reverse the list
and also ensures that all callers construct the list in the right
order.
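
For context, the resulting insertion logic might look roughly like
the sketch below. This is illustrative only: the exact parameter
list, the zalloc helper, and the inline_list fields are assumed from
the shape of tools/perf/util/srcline.c, not quoted from the patch.

static int inline_list__append(char *filename, char *funcname,
			       int line_nr, struct inline_node *node)
{
	struct inline_list *ilist;

	ilist = zalloc(sizeof(*ilist));
	if (ilist == NULL)
		return -1;

	ilist->filename = filename;
	ilist->funcname = funcname;
	ilist->line_nr = line_nr;

	/*
	 * Insert in the order the user asked for: appending keeps the
	 * innermost (callee) frame first, prepending flips the list so
	 * the outermost caller comes first. This replaces the old
	 * post-hoc inline_list__reverse step.
	 */
	if (callchain_param.order == ORDER_CALLEE)
		list_add_tail(&ilist->list, &node->val);
	else
		list_add(&ilist->list, &node->val);

	return 0;
}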
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Yao Jin <yao.jin@linux.intel.com>
Cc: kernel-team@lge.com
Link: http://lkml.kernel.org/r/20170524062129.32529-6-namhyung@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
- list_add_tail(&ilist->list, &node->val);
+ if (callchain_param.order == ORDER_CALLEE)
+ list_add_tail(&ilist->list, &node->val);
+ else
+ list_add(&ilist->list, &node->val);
#define MAX_INLINE_NEST 1024
-static void inline_list__reverse(struct inline_node *node)
-{
- struct inline_list *ilist, *n;
-
- list_for_each_entry_safe_reverse(ilist, n, &node->val, list)
- list_move_tail(&ilist->list, &node->val);
-}
-
static int addr2line(const char *dso_name, u64 addr,
char **file, unsigned int *line, struct dso *dso,
bool unwind_inlines, struct inline_node *node)
-
- if ((node != NULL) &&
- (callchain_param.order != ORDER_CALLEE)) {
- inline_list__reverse(node);
- }
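
Not part of the patch, purely for illustration: the following
standalone program shows why appending with list_add_tail preserves
addr2line's innermost-first (callee) order, while prepending with
list_add yields caller-first order. It uses a minimal stand-in for
the kernel's list.h; the frame names are hypothetical.

#include <stdio.h>
#include <stddef.h>

/* Minimal stand-in for the kernel's intrusive circular list. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h;
	h->prev = h;
}

static void __list_add(struct list_head *entry,
		       struct list_head *prev, struct list_head *next)
{
	next->prev = entry;
	entry->next = next;
	entry->prev = prev;
	prev->next = entry;
}

/* list_add: insert right after the head (prepend). */
static void list_add(struct list_head *entry, struct list_head *head)
{
	__list_add(entry, head, head->next);
}

/* list_add_tail: insert right before the head (append). */
static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	__list_add(entry, head->prev, head);
}

struct frame {
	const char *name;
	struct list_head list;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

int main(void)
{
	/* Inlined frames as addr2line reports them: innermost first. */
	struct frame frames[] = {
		{ "inlined_leaf" }, { "inlined_mid" }, { "caller" },
	};
	struct list_head head;
	struct list_head *pos;
	size_t i;

	/* ORDER_CALLEE: append, keeping innermost-first order. */
	INIT_LIST_HEAD(&head);
	for (i = 0; i < 3; i++)
		list_add_tail(&frames[i].list, &head);
	printf("callee order:");
	for (pos = head.next; pos != &head; pos = pos->next)
		printf(" %s", container_of(pos, struct frame, list)->name);
	printf("\n");

	/* ORDER_CALLER: prepend, so traversal sees callers first. */
	INIT_LIST_HEAD(&head);
	for (i = 0; i < 3; i++)
		list_add(&frames[i].list, &head);
	printf("caller order:");
	for (pos = head.next; pos != &head; pos = pos->next)
		printf(" %s", container_of(pos, struct frame, list)->name);
	printf("\n");

	return 0;
}

Running it prints "callee order: inlined_leaf inlined_mid caller"
followed by "caller order: caller inlined_mid inlined_leaf", which is
exactly the distinction inline_list__append now encodes at insertion
time instead of reversing afterwards.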