What:		/sys/devices/system/cpu/
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	A collection of both global and individual CPU attributes

		Individual CPU attributes are contained in subdirectories
		named by the kernel's logical CPU number, e.g.:

		/sys/devices/system/cpu/cpu#/
What:		/sys/devices/system/cpu/kernel_max
		/sys/devices/system/cpu/offline
		/sys/devices/system/cpu/online
		/sys/devices/system/cpu/possible
		/sys/devices/system/cpu/present
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	CPU topology files that describe kernel limits related to
		hotplug. Briefly:

		kernel_max: the maximum cpu index allowed by the kernel
		configuration.

		offline: cpus that are not online because they have been
		HOTPLUGGED off or exceed the limit of cpus allowed by the
		kernel configuration (kernel_max above).

		online: cpus that are online and being scheduled.

		possible: cpus that have been allocated resources and can be
		brought online if they are present.

		present: cpus that have been identified as being present in
		the system.

		See Documentation/cputopology.txt for more information.
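The offline, online, possible, and present files all use the kernel's
cpulist text format: comma-separated decimal CPU numbers and ranges,
e.g. "0-3,5". A minimal parser sketch (the helper name is ours, not
part of any kernel interface):

```python
def parse_cpulist(s):
    """Parse a kernel cpulist string such as "0-3,5" into a sorted
    list of CPU numbers. An empty string means no CPUs."""
    cpus = []
    s = s.strip()
    if not s:
        return cpus
    for part in s.split(","):
        if "-" in part:
            # A range such as "0-3" is inclusive on both ends.
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return sorted(cpus)

# Typical use on a live system:
#   parse_cpulist(open("/sys/devices/system/cpu/online").read())
```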
What:		/sys/devices/system/cpu/probe
		/sys/devices/system/cpu/release
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Dynamic addition and removal of CPUs. This is not hotplug
		removal; this is meant for complete removal/addition of the
		CPU from the system.

		probe: writes to this file will dynamically add a CPU to the
		system. Information written to the file to add CPUs is
		architecture specific.

		release: writes to this file dynamically remove a CPU from
		the system. Information written to the file to remove CPUs
		is architecture specific.
What:		/sys/devices/system/cpu/cpu#/node
Contact:	Linux memory management mailing list <linux-mm@kvack.org>
Description:	Discover NUMA node a CPU belongs to

		When CONFIG_NUMA is enabled, a symbolic link that points
		to the corresponding NUMA node directory.

		For example, the following symlink is created for cpu42
		in the system:

		/sys/devices/system/cpu/cpu42/node2 -> ../../node/node2
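Since the link is named after the node (nodeN), the node number can be
recovered by scanning the CPU's sysfs directory. A sketch, assuming the
standard sysfs layout above (the helper names are ours):

```python
import os
import re

def node_number(entry):
    """If a directory entry name has the form nodeN, return N as an
    int, otherwise None."""
    m = re.match(r"node(\d+)$", entry)
    return int(m.group(1)) if m else None

def cpu_to_node(cpu, sysfs="/sys/devices/system/cpu"):
    """Return the NUMA node of a CPU by scanning its sysfs directory
    for the nodeN symlink; returns None when CONFIG_NUMA is not
    enabled (no such link exists)."""
    for entry in os.listdir("%s/cpu%d" % (sysfs, cpu)):
        n = node_number(entry)
        if n is not None:
            return n
    return None
```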
What:		/sys/devices/system/cpu/cpu#/topology/core_id
		/sys/devices/system/cpu/cpu#/topology/core_siblings
		/sys/devices/system/cpu/cpu#/topology/core_siblings_list
		/sys/devices/system/cpu/cpu#/topology/physical_package_id
		/sys/devices/system/cpu/cpu#/topology/thread_siblings
		/sys/devices/system/cpu/cpu#/topology/thread_siblings_list
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	CPU topology files that describe a logical CPU's relationship
		to other cores and threads in the same physical package.

		One cpu# directory is created per logical CPU in the system,
		e.g. /sys/devices/system/cpu/cpu42/.

		Briefly, the files above are:

		core_id: the CPU core ID of cpu#. Typically it is the
		hardware platform's identifier (rather than the kernel's).
		The actual value is architecture and platform dependent.

		core_siblings: internal kernel map of cpu#'s hardware threads
		within the same physical_package_id.

		core_siblings_list: human-readable list of the logical CPU
		numbers within the same physical_package_id as cpu#.

		physical_package_id: physical package id of cpu#. Typically
		corresponds to a physical socket number, but the actual value
		is architecture and platform dependent.

		thread_siblings: internal kernel map of cpu#'s hardware
		threads within the same core as cpu#.

		thread_siblings_list: human-readable list of cpu#'s hardware
		threads within the same core as cpu#.

		See Documentation/cputopology.txt for more information.
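The *_siblings files are CPU bitmaps printed as comma-separated groups
of hex digits (most significant group first), while the *_list files
carry the human-readable cpulist form. A sketch of decoding the bitmap
form (the helper name is ours):

```python
def mask_to_cpus(mask):
    """Convert a sysfs CPU bitmap string such as "00000000,0000000f"
    into the list of CPU numbers whose bits are set. Groups are
    comma-separated with the most significant group first, so the
    whole string can be read as one big-endian hex number."""
    value = int(mask.strip().replace(",", ""), 16)
    cpus = []
    bit = 0
    while value:
        if value & 1:
            cpus.append(bit)
        value >>= 1
        bit += 1
    return cpus
```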
What:		/sys/devices/system/cpu/cpuidle/current_driver
		/sys/devices/system/cpu/cpuidle/current_governor_ro
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Discover cpuidle policy and mechanism

		Various CPUs today support multiple idle levels that are
		differentiated by varying exit latencies and power
		consumption during idle.

		Idle policy (governor) is differentiated from idle mechanism
		(driver).

		current_driver: displays current idle mechanism

		current_governor_ro: displays current idle policy

		See files in Documentation/cpuidle/ for more information.
What:		/sys/devices/system/cpu/cpu#/cpufreq/*
Date:		pre-git history
Contact:	cpufreq@vger.kernel.org
Description:	Discover and change clock speed of CPUs

		Clock scaling allows you to change the clock speed of the
		CPUs on the fly. This is a nice method to save battery
		power, because the lower the clock speed, the less power
		the CPU uses.

		There are many knobs to tweak in this directory.

		See files in Documentation/cpu-freq/ for more information.

		In particular, read Documentation/cpu-freq/user-guide.txt
		to learn how to control the knobs.
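The cpufreq attributes are plain text files; frequencies are reported
in kHz. A reading sketch, assuming the standard sysfs layout and the
attribute names documented in Documentation/cpu-freq/user-guide.txt
(e.g. scaling_cur_freq); the helper names are ours:

```python
def read_cpufreq(cpu, name, sysfs="/sys/devices/system/cpu"):
    """Read one cpufreq attribute of a CPU, e.g. scaling_cur_freq or
    scaling_governor, and return its text content."""
    path = "%s/cpu%d/cpufreq/%s" % (sysfs, cpu, name)
    with open(path) as f:
        return f.read().strip()

def khz_to_mhz(khz_text):
    """cpufreq reports frequencies in kHz; convert a reading to MHz."""
    return int(khz_text) / 1000.0

# Typical use: khz_to_mhz(read_cpufreq(0, "scaling_cur_freq"))
```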
What:		/sys/devices/system/cpu/cpu*/cache/index3/cache_disable_{0,1}
KernelVersion:	2.6.27
Contact:	discuss@x86-64.org
Description:	Disable L3 cache indices

		These files exist in every CPU's cache/index3 directory. Each
		cache_disable_{0,1} file corresponds to one disable slot which
		can be used to disable a cache index. Reading from these files
		on a processor with this functionality will return the currently
		disabled index for that node. There is one L3 structure per
		node, or per internal node on MCM machines. Writing a valid
		index to one of these files will cause the specified cache
		index to be disabled.

		All AMD processors with L3 caches provide this functionality.
		For details, see BKDGs at
		http://developer.amd.com/documentation/guides/Pages/default.aspx
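A sketch of writing to one of the two disable slots, purely for
illustration: it assumes the layout above, requires root, and only
works on an AMD processor with this functionality (the helper name is
ours):

```python
def disable_l3_index(cpu, slot, index, sysfs="/sys/devices/system/cpu"):
    """Write a cache index into one of a CPU's L3 disable slots
    (cache/index3/cache_disable_{0,1}). Which indices are valid is
    processor specific; see the BKDG for the processor family."""
    if slot not in (0, 1):
        # Only two disable slots exist per L3 structure.
        raise ValueError("only disable slots 0 and 1 exist")
    path = "%s/cpu%d/cache/index3/cache_disable_%d" % (sysfs, cpu, slot)
    with open(path, "w") as f:
        f.write("%d\n" % index)
```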