#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#
config USER_STACKTRACE_SUPPORT
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool
config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.
config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config TRACER_MAX_TRACE
	bool
config TRACING
	bool
	select STACKTRACE if STACKTRACE_SUPPORT
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function. This NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If tracing is disabled at
	  runtime (the bootup default), the overhead of the instructions is
	  very small and not measurable even in micro-benchmarks.
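As a usage sketch (not part of this Kconfig): once a kernel is built with this option, the tracer is driven through the debugfs files that the help texts in this file refer to, assumed here to be mounted at /debugfs:

```shell
# Sketch: enable the function tracer at runtime and read the buffer
# (assumes debugfs is mounted at /debugfs, as in this file's help texts).
echo function > /debugfs/tracing/current_tracer
head -n 20 /debugfs/tracing/trace
# Select the nop tracer to stop function tracing again:
echo nop > /debugfs/tracing/current_tracer
```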
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread, along with some related
	  information.
	  This is done by saving the current return address into a stack
	  of calls kept on the current task structure.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
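A hedged sketch of the maximum-search workflow described above (paths assume debugfs is mounted at /debugfs, as in the help text):

```shell
# Sketch: restart the maximum-latency search and read the result
# (assumes debugfs mounted at /debugfs and this tracer built in).
echo irqsoff > /debugfs/tracing/current_tracer
echo 0 > /debugfs/tracing/tracing_max_latency   # reset the maximum search
sleep 1
cat /debugfs/tracing/tracing_max_latency        # worst latency seen, in usecs
```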
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	help
	  This tracer gets called from the context switch and records
	  all switching of tasks.
config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the /scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  ( Note that tracing self-tests can't be enabled if this tracer is
	  selected, because the self-tests are an initcall as well and that
	  would invalidate the boot trace. )
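The bootgraph.pl flow mentioned above can be sketched as follows (run from the kernel source tree after boot; the trace path matches the help text):

```shell
# Sketch: render the boot trace recorded by this tracer as an SVG
# using the scripts/bootgraph.pl tool named in the help text above.
cat /debug/tracing/trace | perl scripts/bootgraph.pl > boot.svg
```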
config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether it hit or missed.
	  The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed.
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.
config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. Because this logic has to execute in every
	  kernel function, all the time, this option can slow down the
	  kernel measurably and is generally intended for kernel
	  developers only.
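A sketch of reading the result once the tracer is active (assumes debugfs is mounted at /debugfs, matching the help text's path):

```shell
# Sketch: inspect the deepest kernel stack usage recorded so far
# (assumes debugfs mounted at /debugfs and this option built in).
cat /debugfs/tracing/stack_trace
```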
config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace branches"
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each cpu.
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	help
	  This option will modify all the calls to ftrace dynamically
	  (will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
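With DYNAMIC_FTRACE, the set of patched call sites can also be narrowed at runtime through the set_ftrace_filter file; a sketch, assuming debugfs is mounted at /debugfs:

```shell
# Sketch: restrict dynamic function tracing to scheduler functions
# (set_ftrace_filter is provided by DYNAMIC_FTRACE; assumes debugfs
# mounted at /debugfs).
echo 'sched_*' > /debugfs/tracing/set_ftrace_filter
echo function > /debugfs/tracing/current_tracer
head -n 20 /debugfs/tracing/trace
```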
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD
config FTRACE_SELFTEST
	bool
config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests is run to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.