[PATCH] x86-64: survive having no irq mapping for a vector
Occasionally the kernel has bugs that result in no irq being found for a
given CPU vector. If we acknowledge the irq, the system has a good chance
of continuing even though we dropped an irq message. If we continue to
simply print a message and not acknowledge the irq, the system is likely to
become non-responsive shortly thereafter.
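A minimal sketch of the resulting do_IRQ() behavior (identifiers follow the
kernel of that era; treat the details as illustrative rather than the exact diff):

    if (likely(irq < NR_IRQS))
            generic_handle_irq(irq);
    else {
            /* No irq mapped to this vector: ack the local APIC anyway
             * so its in-service bit is cleared and later interrupts
             * keep being delivered, then just log the event. */
            if (!disable_apic)
                    ack_APIC_irq();
            if (printk_ratelimit())
                    printk(KERN_EMERG "%s: no irq handler for vector\n",
                           __func__);
    }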
AK: Fixed compilation for UP kernels
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: "Luigi Genoni" <luigi.genoni@pirelli.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Evgeniy Polyakov [Tue, 13 Feb 2007 12:26:25 +0000 (13:26 +0100)]
[PATCH] x86-64: Minor patch for compilation warning in x86_64 signal code
If DEBUG_SIG is enabled in the source code, ia32_signal.c compiles with a
warning due to a wrong format string. The attached patch fixes that. It is a
quite minor update, since by default DEBUG_SIG is not enabled and cannot be
turned on without code modification.
Roland Dreier [Tue, 13 Feb 2007 12:26:25 +0000 (13:26 +0100)]
[PATCH] x86-64: avoid warning message livelock
I've seen my box paralyzed by an endless spew of
rtc: lost some interrupts at 1024Hz.
messages on the serial console. What seems to be happening is that
something real causes an interrupt to be lost and triggers the
message. But then printing the message to the serial console (from
the hpet interrupt handler) takes more than 1/1024th of a second, and
then some more interrupts are lost, so the message triggers again....
Fix this by adding a printk_ratelimit() before printing the warning.
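The fix boils down to something like this in the hpet interrupt handler
(variable names are illustrative, not necessarily the exact ones in the code):

    if (printk_ratelimit())
            printk(KERN_WARNING
                   "rtc: lost some interrupts at %ldHz.\n", hpet_rtc_int_freq);

printk_ratelimit() returns false once the default rate limit is exceeded, so
the serial console can drain instead of triggering the next lost interrupt.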
Signed-off-by: Roland Dreier <rolandd@cisco.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Andi Kleen <ak@suse.de>
Benjamin Romer [Tue, 13 Feb 2007 12:26:25 +0000 (13:26 +0100)]
[PATCH] x86-64: update IO-APIC dest field to 8-bit for xAPIC
On the Unisys ES7000/ONE system, we encountered a problem where performing
a kexec reboot or dump on any cell other than cell 0 causes the system
timer to stop working, resulting in a hang during timer calibration in the
new kernel.
We traced the problem to one line of code in disable_IO_APIC(), which needs
to restore the timer's IO-APIC configuration before rebooting. The code is
currently using the 4-bit physical destination field, rather than using the
8-bit logical destination field, and it cuts off the upper 4 bits of the
timer's APIC ID. If we change this to use the logical destination field,
the timer works and we can kexec on the upper cells. This was tested on
two different cells (0 and 2) in an ES7000/ONE system.
For reference, the relevant Intel xAPIC spec is kept at
ftp://download.intel.com/design/chipsets/e8501/datashts/30962001.pdf,
specifically on page 334.
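The change itself is essentially one field access in disable_IO_APIC(); a
sketch using the IO_APIC_route_entry layout of that era (field names assumed
from the io_apic headers):

    /* before: 4-bit physical destination, truncates APIC IDs above 15 */
    entry.dest.physical.physical_dest = apicid;

    /* after: full 8-bit logical destination, so the timer's IO-APIC
     * routing entry is restored correctly on the upper cells */
    entry.dest.logical.logical_dest = apicid;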
Signed-off-by: Benjamin M Romer <benjamin.romer@unisys.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Remove the kernel config option X86_XADD, which is not used in any source or
header file.
Signed-off-by: Robert P. J. Day <rpjday@mindspring.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Randy Dunlap [Tue, 13 Feb 2007 12:26:24 +0000 (13:26 +0100)]
[PATCH] i386: avoid gcc extension
setcc() in math-emu is written as a gcc statement-expression extension macro
that returns a value. However, it's not used that way and it's not needed
like that, so just make it an inline function so that we don't use an
extension when it's not needed.
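Roughly, the change amounts to this (a sketch based on the math-emu sources;
the status-word macros are the ones already used there):

    /* before: a statement-expression macro, a gcc extension */
    #define setcc(cc) ({ \
            partial_status &= ~(SW_C0 | SW_C1 | SW_C2 | SW_C3); \
            partial_status |= (cc) & (SW_C0 | SW_C1 | SW_C2 | SW_C3); })

    /* after: a plain inline function; nothing ever used the "value"
     * of the macro, so no extension is needed */
    static inline void setcc(int cc)
    {
            partial_status &= ~(SW_C0 | SW_C1 | SW_C2 | SW_C3);
            partial_status |= cc & (SW_C0 | SW_C1 | SW_C2 | SW_C3);
    }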
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Christoph Hellwig <hch@infradead.org> Cc: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
It's a hard hang that not even an NMI could punch through! Frustratingly,
adding printks or function tracing to the ACPI code made the hangs go away
...
After some time an additional detail emerged: disabling the NMI watchdog
made these occasional hangs go away.
So I spent the better part of today trying to debug this and trying out
various theories when I finally found the likely reason for the hang: if
acpi_ns_initialize_devices() executes an _INI AML method and an NMI
happens to hit that AML execution at the wrong moment, the machine would
hang. (My theory is that this must be some sort of chipset setup method
doing stores to chipset MMIO registers?)
Unfortunately, given the characteristics of the hang, it was all but
impossible to figure out which of the numerous AML methods is affected
by this problem.
As a workaround I wrote an interface to disable chipset-based NMIs while
executing _INI sections - and indeed this fixed the hang. I did a
boot-loop of 100 separate reboots and none hung - while without the patch
it would hang every 5-10 attempts. Out of caution I did not touch the
nmi_watchdog=2 case (it's not related to the chipset anyway and didn't
hang).
I implemented this for both x86_64 and i686, tested the i686 laptop both
with nmi_watchdog=1 [which triggered the hangs] and nmi_watchdog=2, and
tested an Athlon64 box with the 64-bit kernel as well. Everything builds
and works with the patch applied.
Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Len Brown <lenb@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Zachary Amsden [Tue, 13 Feb 2007 12:26:24 +0000 (13:26 +0100)]
[PATCH] x86-64: x86_64 - Fix FS/GS registers for VT execution
Initialize FS and GS to __KERNEL_DS as well. The actual value of them is not
important, but it is important to reload them in protected mode. At this time,
they still retain the real mode values from initial boot. VT disallows
execution of code under such conditions, which means hardware virtualization
cannot be used to boot the kernel on Intel platforms, making the boot time
painfully slow.
This requires moving the GS load before the load of GS_BASE, so just move
all the segments loads there to keep them together in the code.
Jack Steiner [Tue, 13 Feb 2007 12:26:24 +0000 (13:26 +0100)]
[PATCH] x86-64: - Ignore long SMI interrupts in clock calibration code - update 1
Add failsafe mechanism to HPET/TSC clock calibration.
Signed-off-by: Jack Steiner <steiner@sgi.com>
Updated to include failsafe mechanism & additional community feedback.
Patch built on latest 2.6.20-rc4-mm1 tree.
Andreas Herrmann [Tue, 13 Feb 2007 12:26:23 +0000 (13:26 +0100)]
[PATCH] i386: fix size_or_mask and size_and_mask
mtrr: fix size_or_mask and size_and_mask
This fixes two bugs in the /proc/mtrr interface:
o If the physical address size crosses the 44-bit boundary,
  size_or_mask is evaluated incorrectly.
o size_and_mask limits the width of the physical base
  address for an MTRR to be less than 44 bits.
TBD: later patch had one more change, but I think that was bogus.
TBD: need to double check
Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com> Signed-off-by: Andi Kleen <ak@suse.de>
Andi Kleen [Tue, 13 Feb 2007 12:26:23 +0000 (13:26 +0100)]
[PATCH] x86-64: Allow to run a program when a machine check event is detected
When a machine check event is detected (including an AMD RevF threshold
overflow event), allow running a "trigger" program. This allows user space
to react to such events sooner.
The trigger is configured using a new trigger entry in the
machinecheck sysfs interface. It is currently shared between
all CPUs.
I also fixed the AMD threshold handler to run the machine
check polling code immediately to actually log any events
that might have caused the threshold interrupt.
Also added some documentation for the mce sysfs interface.
Jan Beulich [Tue, 13 Feb 2007 12:26:23 +0000 (13:26 +0100)]
[PATCH] x86-64: Tighten mce_amd driver MSR reads
While debugging an unrelated problem in Xen, I noticed odd reads from
non-existent MSRs. Having now found time to look at why these happen, I
came up with the patch below, which
- prevents accessing MCi_MISCj with j > 0 when the block pointer in
MCi_MISC0 is zero
- accesses only contiguous MCi_MISCj until a non-implemented one is
found
- doesn't touch unimplemented blocks in mce_threshold_interrupt at all
- gives names to two bits previously derived from MASK_VALID_HI (it
took me some time to understand the code without this)
The first three items, besides being apparently closer to the spec, should
also help cut down on the time mce_threshold_interrupt() takes.
[PATCH] x86-64: Handle 32 bit PerfMon Counter writes cleanly in x86_64 nmi_watchdog
P6 CPUs and Core/Core 2 CPUs which have the 'architectural perf mon' feature
only support writes of the low 32 bits of the Performance Monitoring Counters.
Bits 32..39 are sign extended based on bit 31, and bits 40..63 are reserved
and should be zero.
This patch:
Change the x86_64 nmi handler to handle this case cleanly.
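A hedged sketch of what "handling it cleanly" means for reloading the
watchdog counter (helper name and structure are illustrative):

    /* Only the low 32 bits are writable on these CPUs; bits 32..39
     * are sign extended from bit 31.  Program the next period as a
     * negative 32-bit value so the counter overflows after "count"
     * events, instead of writing a full 64-bit value. */
    static void write_watchdog_counter32(unsigned int msr, unsigned int count)
    {
            wrmsr(msr, (u32)(-count), 0);
    }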
[PATCH] x86-64: Remove fastcall references in x86_64 code
Unlike x86, x86_64 already passes arguments in registers. The use of the
regparm attribute makes no difference in the produced code, and the use of
fastcall just bloats the code.
Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
Rohit Seth [Tue, 13 Feb 2007 12:26:22 +0000 (13:26 +0100)]
[PATCH] x86-64: Fix fake numa for x86_64 machines with big IO hole
This patch resolves the issue of running with numa=fake=X on the kernel
command line on x86_64 machines that have a big IO hole. While calculating
the size of each node, we now look at the total hole size in that range.
Previously there were nodes that only had IO holes in them, causing kernel
boot problems. We now use NODE_MIN_SIZE (64MB) as the minimum size of
memory that any node must have. We reduce the number of allocated nodes if
the number of nodes specified on the kernel command line results in any node
getting memory smaller than NODE_MIN_SIZE.
This change allows the extra memory to be handed out in NODE_MIN_SIZE
granules and distributed uniformly among as many nodes (called big nodes) as
possible.
[akpm@osdl.org: build fix] Signed-off-by: David Rientjes <reintjes@google.com> Signed-off-by: Paul Menage <menage@google.com> Signed-off-by: Rohit Seth <rohitseth@google.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
Ingo Molnar [Tue, 13 Feb 2007 12:26:22 +0000 (13:26 +0100)]
[PATCH] i386: improve sched_clock() on i686
Clean up sched_clock() on i686: it will use the TSC if available and fall
back to jiffies only if the user asked for it to be disabled via notsc or
the CPU calibration code didn't figure out the right cpu_khz.
This generally makes the scheduler timestamps more fine-grained, on all
hardware. (The current scheduler is pretty resistant to asynchronous
sched_clock() values on different CPUs; it will allow at most up to a jiffy
of jitter.)
Also simplify sched_clock()'s check for TSC availability: propagate the
desire and ability to use the TSC into the tsc_disable flag; previously
this flag only indicated whether the notsc option was passed. This makes
the rare low-res sched_clock() codepath a single branch off a read-mostly
flag.
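The resulting structure is roughly the following (a sketch; helper names
approximate the i386 tsc code of that time):

    unsigned long long sched_clock(void)
    {
            unsigned long long this_offset;

            /* rare path: user passed notsc, or calibration never
             * produced a usable cpu_khz */
            if (unlikely(tsc_disable))
                    return (unsigned long long)jiffies * (1000000000 / HZ);

            rdtscll(this_offset);
            return cycles_2_ns(this_offset);    /* scaled by cpu_khz */
    }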
Stephane Eranian [Tue, 13 Feb 2007 12:26:22 +0000 (13:26 +0100)]
[PATCH] i386: add idle notifier
Add a notifier mechanism to the low level idle loop. You can register a
callback function which gets invoked on entry and exit from the low level idle
loop. The low level idle loop is defined as the polling loop, low-power call,
or the mwait instruction. Interrupts processed by the idle thread are not
considered part of the low level loop.
The notifier can be used to measure precisely how much time is spent in
useless execution (or low-power mode). The perfmon subsystem uses it to turn
monitoring on/off.
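Registration follows the existing x86-64 idle-notifier pattern; something
along these lines (the callback body and names are illustrative, and the
IDLE_START/IDLE_END actions are assumed to match the x86-64 version):

    /* assumes <linux/notifier.h> plus the idle notifier header */
    static int perfmon_idle_notify(struct notifier_block *nb,
                                   unsigned long action, void *data)
    {
            switch (action) {
            case IDLE_START:
                    /* CPU is entering the low level idle loop */
                    break;
            case IDLE_END:
                    /* CPU left the idle loop, e.g. to handle an interrupt */
                    break;
            }
            return NOTIFY_OK;
    }

    static struct notifier_block perfmon_idle_nb = {
            .notifier_call = perfmon_idle_notify,
    };

    idle_notifier_register(&perfmon_idle_nb);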
Vivek Goyal [Tue, 13 Feb 2007 12:26:22 +0000 (13:26 +0100)]
[PATCH] generic: Break init() in two parts to avoid MODPOST warnings
o init() is a non-__init function in the .text section but it calls many
  functions which are in the .init.text section. Hence MODPOST generates lots
  of cross-reference warnings on i386 if compiled with CONFIG_RELOCATABLE=y
WARNING: vmlinux - Section mismatch: reference to .init.text:smp_prepare_cpus from .text between 'init' (at offset 0xc0101049) and 'rest_init'
WARNING: vmlinux - Section mismatch: reference to .init.text:migration_init from .text between 'init' (at offset 0xc010104e) and 'rest_init'
WARNING: vmlinux - Section mismatch: reference to .init.text:spawn_ksoftirqd from .text between 'init' (at offset 0xc0101053) and 'rest_init'
o This patch breaks init() down into two parts: one part which can go
  into the .init.text section and be freed, and another part which has to
  be non-__init (init_post()). Now init() calls init_post(), and init_post()
  does not call any functions present in .init sections, hence getting
  rid of the warnings.
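Schematically the split looks like this (a simplified sketch, not the full
function bodies):

    /* must stay in .text: runs after the init sections are freed */
    static int noinline init_post(void)
    {
            free_initmem();
            /* ... open console, mount root, exec init ... */
            run_init_process("/sbin/init");
            panic("No init found.");
    }

    /* may live in .init.text: everything it calls directly is __init */
    static int __init init(void *unused)
    {
            /* smp_prepare_cpus(), migration_init(), spawn_ksoftirqd(), ... */
            do_basic_setup();
            return init_post();
    }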
Vivek Goyal [Tue, 13 Feb 2007 12:26:22 +0000 (13:26 +0100)]
[PATCH] i386: move startup_32() in text.head section
o The entry point startup_32 was in the .text section but it was accessing
  some init data too, which prompts MODPOST to generate compilation warnings.
WARNING: vmlinux - Section mismatch: reference to .init.data:boot_params from
.text between '_text' (at offset 0xc0100029) and 'startup_32_smp'
WARNING: vmlinux - Section mismatch: reference to .init.data:boot_params from
.text between '_text' (at offset 0xc0100037) and 'startup_32_smp'
WARNING: vmlinux - Section mismatch: reference to
.init.data:init_pg_tables_end from .text between '_text' (at offset
0xc0100099) and 'startup_32_smp'
o We can't move startup_32 to .init.text as this entry point has to be at the
  start of bzImage. Hence startup_32 has been moved to a new section,
  .text.head, and MODPOST has been instructed not to generate warnings if
  init data is accessed from the .text.head section. This code has been
  audited.
o SMP boot up code (startup_32_smp) can go into .init.text if CPU hotplug
is not supported. Otherwise it generates more warnings
WARNING: vmlinux - Section mismatch: reference to .init.data:new_cpu_data from
.text between 'checkCPUtype' (at offset 0xc0100126) and 'is486'
WARNING: vmlinux - Section mismatch: reference to .init.data:new_cpu_data from
.text between 'checkCPUtype' (at offset 0xc0100130) and 'is486'
Zachary Amsden [Tue, 13 Feb 2007 12:26:22 +0000 (13:26 +0100)]
[PATCH] i386: Paravirt debug defaults off
Deliberate register clobbering around performance-critical inline code is
great for testing but bad to leave on by default. Many people ship with
DEBUG_KERNEL turned on, so stop making DEBUG_PARAVIRT default to on.
Zachary Amsden [Tue, 13 Feb 2007 12:26:21 +0000 (13:26 +0100)]
[PATCH] i386: Vmi timer race
Because the timer code moves around, and we might eventually move our init to
a late_time_init hook, save and restore IRQs around this code; it is
definitely not interrupt safe.
Zachary Amsden [Tue, 13 Feb 2007 12:26:21 +0000 (13:26 +0100)]
[PATCH] i386: Profile pc badness
profile_pc was broken when using paravirtualization because the assumption
that the kernel was running at CPL 0 was violated, causing bad logic to read
a random value off the stack.
The only way to be in kernel lock functions is to be in kernel
code, so validate that assumption explicitly by checking the CS
value. We don't want to be fooled by BIOS / APM segments and
try to read those stacks, so only match KERNEL_CS.
I moved some stuff in segment.h to make it prettier.
Zachary Amsden [Tue, 13 Feb 2007 12:26:21 +0000 (13:26 +0100)]
[PATCH] i386: VMI timer patches
VMI timer code. It works by taking over the local APIC clock when the APIC is
configured, which requires a couple of hooks into the APIC code. The backend
timer code could be commonized into the timer infrastructure, but there are
some pieces missing (stolen time, in particular), and the exact semantics of
when to do accounting for NO_IDLE need to be shared between different
hypervisors as well. So for now, the VMI timer is a separate module.
[Adrian Bunk: cleanups]
Subject: VMI timer patches Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
Zachary Amsden [Tue, 13 Feb 2007 12:26:21 +0000 (13:26 +0100)]
[PATCH] i386: SMP boot hook for paravirt
Add a VMI SMP boot hook. We emulate a regular boot sequence and use the same
APIC IPI initiation; we just poke in magic values to load into the CPU state
when the startup IPI is received, rather than having to jump through a
real-mode trampoline.
This is all that was needed to get SMP to work.
Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
Zachary Amsden [Tue, 13 Feb 2007 12:26:21 +0000 (13:26 +0100)]
[PATCH] i386: IOPL handling for paravirt guests
I found a clever way to make the extra IOPL switching invisible to
non-paravirt compiles - since kernel_rpl is statically defined to be zero
there, and only kernels running at a non-zero RPL have a problem restoring
IOPL, as popf does not restore the IOPL flags unless run at CPL 0.
Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
Zachary Amsden [Tue, 13 Feb 2007 12:26:21 +0000 (13:26 +0100)]
[PATCH] i386: paravirt CPU hypercall batching mode
The VMI ROM has a mode where hypercalls can be queued and batched. This turns
out to be a significant win during context switch, but must be done at a
specific point before side effects to CPU state are visible to subsequent
instructions. This is similar to the MMU batching hooks already provided.
The same hooks could be used by the Xen backend to implement a context switch
multicall.
To explain a bit more about lazy modes in the paravirt patches, basically, the
idea is that only one of lazy CPU or MMU mode can be active at any given time.
Lazy MMU mode is similar to this lazy CPU mode, and allows for batching of
multiple PTE updates (say, inside a remap loop), but to avoid keeping some
kind of state machine about when to flush CPU or MMU updates, we just allow
one or the other to be active. Although there is no real reason a more
comprehensive scheme could not be implemented, there is also no demonstrated
need for this extra complexity.
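In context-switch terms the hook usage looks roughly like this (hook names as
introduced by this series; the calls shown in between are just examples of
CPU state that can be queued):

    arch_enter_lazy_cpu_mode();     /* start queueing hypercalls */
    load_esp0(tss, next);           /* queued, not yet issued */
    load_TLS(next, cpu);            /* queued, not yet issued */
    arch_leave_lazy_cpu_mode();     /* flush the whole batch at once */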
Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
Zachary Amsden [Tue, 13 Feb 2007 12:26:21 +0000 (13:26 +0100)]
[PATCH] MM: page allocation hooks for VMI backend
The VMI backend uses explicit page type notification to track shadow page
tables. The allocation of page table roots is especially tricky. We need to
clone the root for non-PAE mode while it is protected under the pgd lock to
correctly copy the shadow.
We don't need to allocate pgds in PAE mode (PDPs in Intel terminology), as
they only have 4 entries and are cached entirely by the processor, which
makes shadowing them rather simple.
For base page table level allocation, pmd_populate provides the exact hook
point we need. Also, we need to allocate pages when splitting a large page,
and we must release pages before returning the page to any free pool.
Although these hooks are required with these slightly odd semantics for VMI,
Xen also uses them to determine the exact moment when page tables are created
or released.
AK: All nops for other architectures
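At the call sites the hooks look roughly like this (names follow the
paravirt-ops naming used by this series; treat the exact signatures as an
assumption):

    /* a page is about to be used as a page table */
    paravirt_alloc_pt(pfn);

    /* a page directory (pgd) is being set up */
    paravirt_alloc_pd(pfn);

    /* a page table page is going back to the free pool */
    paravirt_release_pt(pfn);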
Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
Catalin Marinas [Tue, 13 Feb 2007 12:26:21 +0000 (13:26 +0100)]
[PATCH] x86-64: do not always end the stack trace with ULONG_MAX
It makes more sense to end the stack trace with ULONG_MAX only if
nr_entries < max_entries. Otherwise, we lose one entry in the long stack
traces and cannot know whether the trace was complete or not.
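In code terms the terminator simply becomes conditional, roughly:

    /* terminate the trace only if it was not truncated, so a missing
     * ULONG_MAX tells the consumer the trace is incomplete */
    if (trace->nr_entries < trace->max_entries)
            trace->entries[trace->nr_entries++] = ULONG_MAX;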
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
Karsten Weiss [Tue, 13 Feb 2007 12:26:21 +0000 (13:26 +0100)]
[PATCH] x86-64: improved iommu documentation
- add SWIOTLB config help text
- mention Documentation/x86_64/boot-options.txt in
Documentation/kernel-parameters.txt
- remove the duplication of the iommu kernel parameter documentation.
- Better explanation of some of the iommu kernel parameter options.
- "32MB<<order" instead of "32MB^order".
- Mention the default "order" value.
- list the four existing PCI-DMA mapping implementations of arch x86_64
- group the iommu= option keywords by PCI-DMA mapping implementation.
- Distinguish iommu= option keywords from number arguments.
- Explain the meaning of DAC and SAC.
Eric Dumazet [Tue, 13 Feb 2007 12:26:21 +0000 (13:26 +0100)]
[PATCH] x86-64: get rid of ARCH_HAVE_XTIME_LOCK
ARCH_HAVE_XTIME_LOCK is used by the x86_64 arch. This arch needs to place a
read-only copy of xtime_lock into the vsyscall page. This read-only copy is
named __xtime_lock, and xtime_lock is defined in
arch/x86_64/kernel/vmlinux.lds.S as an alias. So the declaration of
xtime_lock in kernel/timer.c was guarded by the ARCH_HAVE_XTIME_LOCK define,
which is defined to true on x86_64.
We can get the same result with __attribute__((weak)) in the declaration; the
linker should do the job.
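A sketch of the generic declaration (the initializer macro is the one the
kernel used at the time; treat the exact attributes as illustrative):

    /* kernel/timer.c: a weak definition; an architecture that places
     * xtime_lock somewhere special (x86-64's vsyscall page) simply
     * provides its own definition/alias and wins at link time */
    __attribute__((weak)) seqlock_t xtime_lock __cacheline_aligned_in_smp =
            SEQLOCK_UNLOCKED;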
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
Olivier Galibert [Tue, 13 Feb 2007 12:26:20 +0000 (13:26 +0100)]
[PATCH] mmconfig: Reserve resources but only when we're sure about them.
Put back the resource reservation as per 4c6e052adfe285ede5884e4e8c4d33af33932c13, but use it *only* when the range(s)
come from a chipset probe instead of the BIOS.
Olivier Galibert [Tue, 13 Feb 2007 12:26:20 +0000 (13:26 +0100)]
[PATCH] mmconfig: Detect and support the E7520 and the 945G/GZ/P/PL
It seems that the only way to reliably support mmconfig in the presence of
funky BIOSes is to detect the host bridge and read where the window is mapped
from its registers. Do that for the E7520 and the 945G/GZ/P/PL for a start.
Olivier Galibert [Tue, 13 Feb 2007 12:26:20 +0000 (13:26 +0100)]
[PATCH] i386: Only call unreachable_devices() when type 1 is available.
unreachable_devices() compares the results of PCI configuration accesses
through type 1 and mmconfig, so it should be called only if type 1 actually
works in the first place.
Convert the PDA code to use %fs rather than %gs as the segment for
per-processor data. This is because some processors show a small but
measurable performance gain for reloading a NULL segment selector (as %fs
generally is in user-space) versus a non-NULL one (as %gs generally is).
On modern processors the difference is very small, perhaps undetectable.
Some old AMD "K6 3D+" processors are noticeably slower when %fs is used
rather than %gs; I have no idea why this might be, but I think they're
sufficiently rare that it doesn't matter much.
This patch also fixes the math emulator, which had not been adjusted to
match the changed struct pt_regs.
[frederik.deweerdt@gmail.com: fixit with gdb]
[mingo@elte.hu: Fix KVM too]
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Ian Campbell <Ian.Campbell@XenSource.com> Acked-by: Ingo Molnar <mingo@elte.hu> Acked-by: Zachary Amsden <zach@vmware.com> Cc: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: Frederik Deweerdt <frederik.deweerdt@gmail.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
- Removed an extraneous debug message from allocate_cachealigned_map
- Changed extract_lsb_from_nodes to return 63 for the case where there was
only one memory node. This prevents the creation of the dynamic hashmap.
- Changed extract_lsb_from_nodes to use only the starting memory address of
a node. On an ES7000, our nodes overlap the starting and ending address,
meaning that we see nodes like
00000 - 10000
10000 - 20000
But other systems have nodes whose start and end addresses do not overlap.
For example:
00000 - 0FFFF
10000 - 1FFFF
In this case, using the ending address would result in an LSB much lower
than what is possible: an LSB of 1 when in reality it should be 16.
Amul Shah [Tue, 13 Feb 2007 12:26:19 +0000 (13:26 +0100)]
[PATCH] x86-64: Allocate the NUMA hash function nodemap dynamically
Remove the statically allocated memory-to-NUMA-node hash map in favor of a
dynamically allocated memory-to-node hash map (it is cache aligned).
This patch has the nice side effect that it allows the hash map to grow
for systems with large amounts of memory (256GB - 1TB) that suffer from
having a small PCI space tacked onto the boot node (which is somewhere
between 192MB and 512MB on the ES7000).
Signed-off-by: Amul Shah <amul.shah@unisys.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Rohit Seth <rohitseth@google.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
Andi Kleen [Tue, 13 Feb 2007 12:26:19 +0000 (13:26 +0100)]
[PATCH] x86-64: Add __copy_from_user_nocache
This does user copies in fs write() into the page cache with write combining.
This pushes the destination out of the CPU's cache, but allows higher
bandwidth in some cases.
The theory is that the page cache data is usually not touched by the
CPU again and it's better to not pollute the cache with it. Also it is a little
faster.
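Call-wise it is a drop-in for __copy_from_user() in the file write path; a
sketch of the intended use (the surrounding variables are illustrative):

    unsigned long left;

    /* like __copy_from_user(), but with non-temporal stores so the
     * freshly written pagecache data does not evict hot cache lines;
     * returns the number of bytes that could not be copied */
    left = __copy_from_user_nocache(kaddr + offset, buf, bytes);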
* master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6:
[SPARC]: Re-export saved_command_line to modules.
[SPARC64]: Increase command line size to 2048 like other arches.
[SPARC64]: We do not need ZONE_DMA.
[NETFILTER]: ip6t_mh: drop piggyback payload packet on MH packets
According to RFC 3775, the MH payload proto field should be IPPROTO_NONE;
otherwise the packet must be discarded (and the receiver should send an ICMP
error). We assume the filter should drop such piggybacked packets every time
to keep them from slipping through firewall rules, even though the final
receiver will discard them.
Signed-off-by: Masahide NAKAMURA <nakam@linux-ipv6.org> Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org> Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
Patrick McHardy [Mon, 12 Feb 2007 19:14:28 +0000 (11:14 -0800)]
[NETFILTER]: nf_conntrack: change nf_conntrack_l[34]proto_unregister to void
No caller checks the return value, and since it's usually called within the
module unload path there's nothing a module could do about errors anyway,
so BUG on invalid conditions and return void.
Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
NF_CT_STAT_INC assumes rcu_read_lock in nf_hook_slow disables
preemption as well, making it legal to use __get_cpu_var without
disabling preemption manually. The assumption is not correct anymore
with preemptible RCU; additionally we need to protect against softirqs
when not holding nf_conntrack_lock.
Add a NF_CT_STAT_INC_ATOMIC macro, which disables local softirqs,
and use it where necessary.
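A sketch of the two variants (per-cpu statistics structure as used by
nf_conntrack; exact spelling follows the patch, not guaranteed here):

    /* only safe where softirqs are already disabled (packet path) */
    #define NF_CT_STAT_INC(count) (__get_cpu_var(nf_conntrack_stat).count++)

    /* safe from process context: keep softirqs off around the
     * non-atomic per-cpu increment */
    #define NF_CT_STAT_INC_ATOMIC(count)                    \
    do {                                                    \
            local_bh_disable();                             \
            __get_cpu_var(nf_conntrack_stat).count++;       \
            local_bh_enable();                              \
    } while (0)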
Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
CONNTRACK_STAT_INC assumes rcu_read_lock in nf_hook_slow disables
preemption as well, making it legal to use __get_cpu_var without
disabling preemption manually. The assumption is not correct anymore
with preemptible RCU; additionally we need to protect against softirqs
when not holding ip_conntrack_lock.
Add a CONNTRACK_STAT_INC_ATOMIC macro, which disables local softirqs,
and use it where necessary.
Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
Patrick McHardy [Mon, 12 Feb 2007 19:12:57 +0000 (11:12 -0800)]
[NETFILTER]: nf_conntrack: properly use RCU API for nf_ct_protos/nf_ct_l3protos arrays
Replace preempt_{enable,disable} based RCU by proper use of the
RCU API and add missing rcu_read_lock/rcu_read_unlock calls in
all paths not obviously only used within packet process context
(nfnetlink_conntrack).
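The conversion follows the standard RCU reader pattern; a sketch of a lookup
outside packet context (the array name comes from the patch, the index
variables are illustrative):

    struct nf_conntrack_l4proto *proto;

    rcu_read_lock();
    proto = rcu_dereference(nf_ct_protos[l3num][l4num]);
    /* ... use proto; must not sleep inside the read-side section ... */
    rcu_read_unlock();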
Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
Patrick McHardy [Mon, 12 Feb 2007 19:12:40 +0000 (11:12 -0800)]
[NETFILTER]: ip_conntrack: properly use RCU API for ip_ct_protos array
Replace preempt_{enable,disable} based RCU by proper use of the
RCU API and add missing rcu_read_lock/rcu_read_unlock calls in
all paths not obviously only used within packet process context
(nfnetlink_conntrack).
Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
Patrick McHardy [Mon, 12 Feb 2007 19:12:26 +0000 (11:12 -0800)]
[NETFILTER]: nf_nat: properly use RCU API for nf_nat_protos array
Replace preempt_{enable,disable} based RCU by proper use of the
RCU API and add missing rcu_read_lock/rcu_read_unlock calls in
paths used outside of packet processing context (nfnetlink_conntrack).
Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>