# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
config USER_STACKTRACE_SUPPORT
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.
config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config TRACER_MAX_TRACE
	bool
config TRACING
	bool
	select STACKTRACE if STACKTRACE_SUPPORT
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If tracing is disabled
	  at run time (the bootup default), then the overhead of the
	  instructions is very small and not measurable even in
	  micro-benchmarks.
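As a run-time usage sketch (not part of this Kconfig file): with FUNCTION_TRACER built in, the tracer is toggled through the tracing directory in debugfs. The snippet assumes debugfs is mounted at /debugfs, matching the path convention used elsewhere in this file; the guard only makes it degrade gracefully on systems laid out differently.

```shell
# Sketch: enable and disable the function tracer at run time.
# Assumes debugfs is mounted at /debugfs (the convention used in
# this file); many systems mount it elsewhere.
TRACING=/debugfs/tracing
if [ -d "$TRACING" ]; then
	echo function > "$TRACING/current_tracer"   # patch NOPs into tracer calls
	head -n 20 "$TRACING/trace"                 # sample the live trace
	echo nop > "$TRACING/current_tracer"        # patch the NOPs back out
else
	echo "tracing directory not found at $TRACING"
fi
```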
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread with some information like
	  the return value.
	  This is done by setting the current return address on the current
	  task structure into a stack of calls.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	help
	  This tracer gets called from the context switch and records
	  all switching of tasks.
config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  You must pass ftrace=initcall on the kernel command line
	  to enable this on bootup.
config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether it hit or missed.
	  The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed.
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.
config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.
config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on DEBUG_KERNEL
	help
	  This tracer helps developers to analyze and optimize the kernel's
	  power management decisions, specifically the C-state and P-state
	  behavior.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.
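A hypothetical enable/read/disable sequence using the sysctl named above (the debugfs path mirrors the help text; the guard only covers kernels built without this option):

```shell
# Sketch: toggle the stack tracer via sysctl kernel.stack_tracer_enabled
# and read the recorded maximum stack trace from debugfs.
SYSCTL=/proc/sys/kernel/stack_tracer_enabled
if [ -w "$SYSCTL" ]; then
	echo 1 > "$SYSCTL"                 # start recording max stack depth
	cat /debugfs/tracing/stack_trace   # deepest stack seen so far
	echo 0 > "$SYSCTL"                 # stop recording
else
	echo "stack tracer sysctl not available"
fi
```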
config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	help
	  This tracer records all branches on the system in a circular
	  buffer giving access to the last N branches for each CPU.
config KMEMTRACE
	bool "Trace SLAB allocations"
	help
	  kmemtrace provides tracing for slab allocator functions, such as
	  kmalloc, kfree, kmem_cache_alloc, kmem_cache_free, etc. Collected
	  data is then fed to the userspace application in order to analyse
	  allocation hotspots, internal fragmentation and so on, making it
	  possible to see how well an allocator performs, as well as debug
	  and profile kernel code.

	  This requires a userspace application to use. See
	  Documentation/vm/kmemtrace.txt for more information.

	  Saying Y will make the kernel somewhat larger and slower. However,
	  if you disable kmemtrace at run-time or boot-time, the performance
	  impact is minimal (depending on the arch the kernel is built for).
config WORKQUEUE_TRACER
	bool "Trace workqueues"
	help
	  The workqueue tracer provides some statistical information
	  about each cpu workqueue thread, such as the number of works
	  inserted and executed since its creation. It can help to
	  evaluate the amount of work each of them has to perform. For
	  example, it can help a developer to decide whether to use a
	  per-cpu workqueue instead of a single-threaded one.
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	help
	  This option will modify all the calls to ftrace dynamically
	  (it patches them out of the binary image and replaces them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
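With DYNAMIC_FTRACE enabled, the patched-in call sites can also be restricted at run time through the set_ftrace_filter debugfs file; a sketch (debugfs path convention as elsewhere in this file; the 'schedule*' glob is only an example pattern):

```shell
# Sketch: trace only functions matching a glob instead of every function.
# set_ftrace_filter accepts glob patterns when DYNAMIC_FTRACE is enabled.
TRACING=/debugfs/tracing
if [ -w "$TRACING/set_ftrace_filter" ]; then
	echo 'schedule*' > "$TRACING/set_ftrace_filter"  # limit patched sites
	echo function > "$TRACING/current_tracer"        # trace just those
else
	echo "dynamic ftrace filter not available"
fi
```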
config FTRACE_MCOUNT_RECORD
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD
config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On
	  bootup, a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.
config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && DEBUG_KERNEL && PCI
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/tracers/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.
config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.