# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
config USER_STACKTRACE_SUPPORT
	bool

config HAVE_FTRACE_NMI_ENTER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool

config HAVE_FUNCTION_GRAPH_FP_TEST
	bool
	help
	  An arch may pass in a unique value (frame pointer) to both the
	  entering and exiting of a function. On exit, the value is compared
	  and if it does not match, then it will panic the kernel.
config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.
config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config HAVE_SYSCALL_TRACEPOINTS
	bool

config TRACER_MAX_TRACE
	bool
config FTRACE_NMI_ENTER
	bool
	depends on HAVE_FTRACE_NMI_ENTER
config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER
	bool
config CONTEXT_SWITCH_TRACER
	bool
# All tracer options should select GENERIC_TRACER. For those options that are
# enabled by all tracers (context switch and event tracer) they select TRACING.
# This allows those options to appear when no other tracer is selected. But the
# options do not appear when something else selects it. We need the two options
# GENERIC_TRACER and TRACING to avoid circular dependencies to accomplish the
# hiding of the automatic options.
config TRACING
	bool
	select STACKTRACE if STACKTRACE_SUPPORT
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
config TRACING_SUPPORT
	bool
	# PPC32 has no irqflags tracing support, but it can use most of the
	# tracers anyway, they were tested to build and work. Note that new
	# exceptions to this list aren't welcome; better to implement the
	# irqflags tracing for your architecture.
	depends on TRACE_IRQFLAGS_SUPPORT || PPC32
	depends on STACKTRACE_SUPPORT
	default y

if TRACING_SUPPORT
menuconfig FTRACE
	bool "Tracers"
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; the NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If tracing is disabled at
	  runtime (the bootup default), the overhead of the instructions is
	  very small and not measurable even in micro-benchmarks.
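# Usage sketch (not part of the original Kconfig): on a kernel built with
# FUNCTION_TRACER, the tracer is driven through the tracing debugfs files.
# The commands below assume debugfs is mounted at /sys/kernel/debug and
# require root; file names may vary by kernel version.

```shell
# Enable the function tracer at runtime
echo function > /sys/kernel/debug/tracing/current_tracer
# Look at the first recorded function calls
head /sys/kernel/debug/tracing/trace
# Turn tracing back off
echo nop > /sys/kernel/debug/tracing/current_tracer
```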
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
	help
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread with some information like
	  the return value. This is done by setting the current return
	  address on the current task structure into a stack of calls.
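# Usage sketch (assumed debugfs layout, not part of the original Kconfig):
# selecting the graph tracer makes the trace file show per-function
# durations laid out as a call graph.

```shell
# Switch to the graph tracer and view the call-graph output (requires root)
echo function_graph > /sys/kernel/debug/tracing/current_tracer
head -20 /sys/kernel/debug/tracing/trace
```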
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
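# Usage sketch (assumed interface, not part of the original Kconfig): after
# resetting the maximum as shown above, run a workload and read back the
# worst irqs-off latency observed. Requires root and a mounted debugfs.

```shell
# Reset the recorded maximum, trace for a while, then read the worst latency (µs)
echo 0 > /sys/kernel/debug/tracing/tracing_max_latency
echo irqsoff > /sys/kernel/debug/tracing/current_tracer
sleep 10
cat /sys/kernel/debug/tracing/tracing_max_latency
```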
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on GENERIC_TIME
	depends on PREEMPT
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	select TRACING
	help
	  This tracer hooks to various trace points in the kernel,
	  allowing the user to pick and choose which trace point they
	  want to trace. It also includes the sched_switch tracer plugin.
config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_SYSCALL_TRACEPOINTS
	select GENERIC_TRACER
	help
	  Basic tracer to catch the syscall entry and exit events.
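# Usage sketch (assumed event-tracing layout, not part of the original
# Kconfig): syscall entry/exit events are enabled through the events
# directory under debugfs. Requires root; layout may vary by version.

```shell
# Enable all syscall entry/exit events and watch them stream by
echo 1 > /sys/kernel/debug/tracing/events/syscalls/enable
cat /sys/kernel/debug/tracing/trace_pipe
```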
config BOOT_TRACER
	bool "Trace boot initcalls"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  You must pass in initcall_debug and ftrace=initcall to the kernel
	  command line to enable this on bootup.
config TRACE_BRANCH_PROFILING
	bool
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  The branch profiling is a software profiler. It will add hooks
	  into the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.
	  If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /sys/kernel/debug/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether it hit or missed.
	  The results will be displayed in:

	  /sys/kernel/debug/tracing/profile_branch

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  needs to be profiled.

endchoice
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.
config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.
config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on X86
	select GENERIC_TRACER
	help
	  This tracer helps developers to analyze and optimize the kernel's
	  power management decisions, specifically the C-state and P-state
	  behavior.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled
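# Usage sketch (not part of the original Kconfig): the sysctl named above
# toggles the tracer at runtime. Requires root; assumes debugfs is mounted
# at /sys/kernel/debug.

```shell
# Turn the stack tracer on via sysctl
sysctl kernel.stack_tracer_enabled=1
# Inspect the deepest kernel stack recorded so far
cat /sys/kernel/debug/tracing/stack_trace
```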
config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	select GENERIC_TRACER
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each cpu.
config KMEMTRACE
	bool "Trace SLAB allocations"
	select GENERIC_TRACER
	help
	  kmemtrace provides tracing for slab allocator functions, such as
	  kmalloc, kfree, kmem_cache_alloc, kmem_cache_free, etc. Collected
	  data is then fed to the userspace application in order to analyse
	  allocation hotspots, internal fragmentation and so on, making it
	  possible to see how well an allocator performs, as well as debug
	  and profile kernel code.

	  This requires a userspace application to use. See
	  Documentation/trace/kmemtrace.txt for more information.

	  Saying Y will make the kernel somewhat larger and slower. However,
	  if you disable kmemtrace at run-time or boot-time, the performance
	  impact is minimal (depending on the arch the kernel is built for).
config WORKQUEUE_TRACER
	bool "Trace workqueues"
	select GENERIC_TRACER
	help
	  The workqueue tracer provides some statistical information
	  about each cpu workqueue thread, such as the number of
	  works inserted and executed since their creation. It can help
	  to evaluate the amount of work each of them has to perform.
	  For example, it can help a developer to decide whether to
	  choose a per-cpu workqueue instead of a singlethreaded one.
config BLK_DEV_IO_TRACE
	bool "Support for tracing block io actions"
	select GENERIC_TRACER
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	  git://git.kernel.dk/blktrace.git

	  Tracing also is possible using the ftrace interface, e.g.:

	    echo 1 > /sys/block/sda/sda1/trace/enable
	    echo blk > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace_pipe
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger,
	  but otherwise has native performance as long as no tracing
	  is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
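# Usage sketch (assumed interface, not part of the original Kconfig): with
# DYNAMIC_FTRACE, the set of patched call sites can be restricted through
# set_ftrace_filter. Requires root and a mounted debugfs.

```shell
# Patch in tracing only for scheduler functions (glob match)
echo 'sched_*' > /sys/kernel/debug/tracing/set_ftrace_filter
echo function > /sys/kernel/debug/tracing/current_tracer
```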
config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled, which defaults to zero.
	  When a 1 is echoed into this file, profiling begins; when a
	  zero is entered, profiling stops. A file in the trace_stats
	  directory called functions shows the list of functions that
	  have been hit and their counters.
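# Usage sketch (not part of the original Kconfig): the statistics directory
# and file names below are assumptions and may differ by kernel version
# (mainline spells the directory trace_stat). Requires root.

```shell
# Start the profiler, let it collect, then read the cpu0 statistics
echo 1 > /sys/kernel/debug/tracing/function_profile_enabled
sleep 5
cat /sys/kernel/debug/tracing/trace_stat/function0   # name may vary by version
echo 0 > /sys/kernel/debug/tracing/function_profile_enabled
```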
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool
config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.
config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/trace/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.
config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.
config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark it.
	  It creates its own ring buffer such that it will not interfere with
	  any other users of the ring buffer (such as ftrace). It then creates
	  a producer and consumer that will run for 10 seconds and sleep for
	  10 seconds. Each interval it will print out the number of events
	  it recorded and give a rough estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.
endif # TRACING_SUPPORT