Make sure crypt_stat->flags is protected with a lock in ecryptfs_open().
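For illustration, the kind of change this amounts to looks roughly like the sketch
below; the use of crypt_stat->cs_mutex and the particular flag bits are assumptions
based on the eCryptfs code of that era, not a quote of the actual patch:

    struct ecryptfs_crypt_stat *crypt_stat =
            &ecryptfs_inode_to_private(inode)->crypt_stat;

    /* Update crypt_stat->flags only while holding its mutex. */
    mutex_lock(&crypt_stat->cs_mutex);
    crypt_stat->flags |= ECRYPTFS_POLICY_APPLIED | ECRYPTFS_ENCRYPTED;
    mutex_unlock(&crypt_stat->cs_mutex);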
Signed-off-by: Michael Halcrow <mhalcrow@us.ibm.com> Cc: Al Viro <viro@ZenIV.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
When the Linux kernel is compiled with CONFIG_DEBUG_SHIRQ=y,
the Soundblaster Audigy2 ZS Notebook PCMCIA card causes the
system to hang during boot (udev stage) or when the card is hot-plugged.
The CONFIG_DEBUG_SHIRQ flag has defaulted to 'y' in all Fedora
kernels since 2.6.23. The problem was reported at
https://bugzilla.redhat.com/show_bug.cgi?id=326411
The issue was hunted down to the snd_emu10k1_create() routine:
The early access to the I/O ports in the interrupt handler causes
the freeze. Obviously it is necessary to initialize the I/O ports
before accessing them. This patch moves the registration of
the IRQ handler to after the initialization of the I/O ports.
Signed-off-by: Jaroslav Franek <jarin.franek@post.cz> Acked-by: James Courtier-Dutton <James@superbug.co.uk> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
The auto-config mode of Realtek ALC codecs has had a bug since 2.6.25
that prevents it from resuming properly. The problem was the wrong
assignment of init_hook, which overrode the whole initialization.
This fixes a context assertion in ssb that makes b44 print
out warnings on resume.
This fixes the following kernel oops:
http://www.kerneloops.org/oops.php?number=12732
http://www.kerneloops.org/oops.php?number=11410
Signed-off-by: Michael Buesch <mb@bu3sch.de> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Packet sending is driven by two flags, tx_ready and tx_queued.
It was possible that data were queued for sending while the
hardware was flagged as blocked even though in fact it was not.
tx_queued was an indicator but really should be a counter; otherwise
the first fragmented packet clears the tx_queued flag even though
there may be pending packets which do not get sent.
New semantics:
tx_ready - set, if hw is ready to send packet, no packet is being
transferred right now
set the flag right at the place where data are copied
into hw memory, and not earlier without checking whether
the copy was successful
tx_queued - count of enqueued packets, including fragments
Tested-by: Michal Rokos <michal.rokos@gmail.com> Signed-off-by: David Sterba <dsterba@suse.cz> Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
This fixes a kernel crash on rmmod, in the case where the controller
was restarted before doing the rmmod.
Signed-off-by: Michael Buesch <mb@bu3sch.de> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
OGAWA Hirofumi and Fede have reported rare pmd_ERROR messages:
mm/memory.c:127: bad pmd ffff810000207xxx(9090909090909090).
Initialization's cleanup_highmap was leaving alignment filler
behind in the pmd for MODULES_VADDR: when vmalloc's guard page
would occupy a new page table, it's not allocated, and then
module unload's vfree hits the bad 9090 pmd entry left over.
If a CPU-specific cpufreq driver (e.g. longrun) has a "setpolicy" function,
no governor object is set in the cpufreq_policy object by the
"__cpufreq_set_policy" function in drivers/cpufreq/cpufreq.c.
This causes a NULL object access in the "store_scaling_setspeed" and
"show_scaling_setspeed" functions in drivers/cpufreq/cpufreq.c when reading or
writing through the /sys interface (e.g. cat
/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed).
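A sketch of the kind of guard the fix needs (paraphrased, not the literal hunk;
names as in the cpufreq code of that era):

    static ssize_t show_scaling_setspeed(struct cpufreq_policy *policy, char *buf)
    {
            /*
             * With a setpolicy driver no governor is attached, so bail out
             * instead of dereferencing a NULL pointer.
             */
            if (!policy->governor || !policy->governor->show_setspeed)
                    return sprintf(buf, "<unsupported>\n");

            return policy->governor->show_setspeed(policy, buf);
    }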
Source code out there hard-codes a notion of what the
_LINUX_CAPABILITY_VERSION #define means in terms of the semantics of the
raw capability system calls capget() and capset(). It's unfortunate, but
true.
Since the confusing header file has been in a released kernel, there is
software that is erroneously using 64-bit capabilities with the semantics
of 32-bit compatibilities. These recently compiled programs may suffer
corruption of their memory when sys_getcap() overwrites more memory than
they are coded to expect, and the raising of added capabilities when using
sys_capset().
As such, this patch does a number of things to clean up the situation
for all. It
1. forces the _LINUX_CAPABILITY_VERSION define to always retain its
legacy value.
2. adopts a new #define strategy for the kernel's internal
implementation of the preferred magic.
3. deprecates v2 capability magic in favor of a new (v3) magic
number. The functionality of v3 is entirely equivalent to v2,
the only difference being that the v2 magic causes the kernel
to log a "deprecated" warning so the admin can find applications
that may be using v2 inappropriately.
[User space code continues to be encouraged to use the libcap API which
protects the application from details like this. libcap-2.10 is the first
to support v3 capabilities.]
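For reference, the resulting version magics in linux/capability.h look roughly
like this (values as merged upstream):

    #define _LINUX_CAPABILITY_VERSION_1  0x19980330  /* legacy 32-bit sets */
    #define _LINUX_CAPABILITY_VERSION_2  0x20071026  /* 64-bit sets, now deprecated */
    #define _LINUX_CAPABILITY_VERSION_3  0x20080522  /* same layout as v2, no warning */

    /* The historical name keeps its legacy value: */
    #define _LINUX_CAPABILITY_VERSION    _LINUX_CAPABILITY_VERSION_1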
Fixes issue reported in https://bugzilla.redhat.com/show_bug.cgi?id=447518.
Thanks to Bojan Smojver for the report.
[akpm@linux-foundation.org: s/depreciate/deprecate/g]
[akpm@linux-foundation.org: be robust about put_user size]
[akpm@linux-foundation.org: coding-style fixes] Signed-off-by: Andrew G. Morgan <morgan@kernel.org> Cc: Serge E. Hallyn <serue@us.ibm.com> Cc: Bojan Smojver <bojan@rexursive.com> Cc: stable@kernel.org Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
During the initial array synchronization process there is a window between
when a prexor operation is scheduled to a specific stripe and when it
completes for a sync_request to be scheduled to the same stripe. When
this happens the prexor completes and the stripe is unconditionally marked
"insync", effectively canceling the sync_request for the stripe. Prior to
2.6.23 this was not a problem because the prexor operation was done under
sh->lock. The effect in older kernels being that the prexor would still
erroneously mark the stripe "insync", but sync_request would be held off
and re-mark the stripe as "!in_sync".
Change the write completion logic to not mark the stripe "in_sync" if a
prexor was performed. The effect of the change is to sometimes not set
STRIPE_INSYNC. The worst this can do is cause the resync to stall waiting
for STRIPE_INSYNC to be set. If this were happening, then STRIPE_SYNCING
would be set and handle_issuing_new_read_requests would cause all
available blocks to eventually be read, at which point prexor would never
be used on that stripe any more and STRIPE_INSYNC would eventually be set.
echo repair > /sys/block/mdN/md/sync_action will correct arrays that may
have lost this race.
Cc: <stable@kernel.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Neil Brown <neilb@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[chrisw: backport to 2.6.25.5] Signed-off-by: Chris Wright <chrisw@sous-sol.org>
If an array was created with --assume-clean we will oops when trying to
set ->resync_max.
Fix this by initializing ->recovery_wait in mddev_find.
Cc: <stable@kernel.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Neil Brown <neilb@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
If a block is computed (rather than read) then a check/repair operation
may be led to believe that the data on disk is correct, when in fact it
isn't. So only compute blocks for failed devices.
This issue has been around since at least 2.6.12, but has become harder to
hit in recent kernels since most reads bypass the cache.
echo repair > /sys/block/mdN/md/sync_action will set the parity blocks to the
correct state.
Cc: <stable@kernel.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Neil Brown <neilb@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
The page decrypt calls in ecryptfs_write() are both pointless and buggy.
Pointless because ecryptfs_get_locked_page() has already brought the page
up to date, and buggy because prior mmap writes will just be blown away by
the decrypt call.
This patch also removes the declaration of a now-nonexistent function
ecryptfs_write_zeros().
Thanks to Eric Sandeen and David Kleikamp for helping to track this
down.
Eric said:
fsx w/ mmap dies quickly ( < 100 ops) without this, and survives
nicely (to millions of ops+) with it in place.
Signed-off-by: Michael Halcrow <mhalcrow@us.ibm.com> Cc: Eric Sandeen <sandeen@redhat.com> Cc: Dave Kleikamp <shaggy@austin.ibm.com> Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[chrisw: backport to 2.6.25.5] Signed-off-by: Chris Wright <chrisw@sous-sol.org>
The check in sys_brk() on the minimum value the brk might have must take
the CONFIG_COMPAT_BRK setting into account. When this option is turned on
(i.e. we support ancient legacy binaries, e.g. libc5-linked stuff), the
lower bound on brk value is mm->end_code, otherwise the brk start is
allowed to be arbitrarily shifted.
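In outline, the check ends up looking something like this (a sketch close to,
but not quoting, the actual hunk in sys_brk()):

    unsigned long min_brk;

    #ifdef CONFIG_COMPAT_BRK
            min_brk = mm->end_code;         /* ancient binaries expect the heap right after the text */
    #else
            min_brk = mm->start_brk;        /* otherwise brk may legitimately start elsewhere */
    #endif
            if (brk < min_brk)
                    goto out;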
Fix a bug in add_to_pagemap. Previously, since pm->out was a char *,
put_user was only copying 1 byte of every PFN, resulting in the top 7
bytes of each PFN not being copied. By requiring that reads be a multiple
of 8 bytes, I can make pm->out and pm->end u64*s instead of char*s, which
makes put_user work properly, and also simplifies the logic in
add_to_pagemap a bit.
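The rough shape of the fixed helper, with pm->out and pm->end now u64 __user
pointers (a paraphrase, not the literal patch):

    static int add_to_pagemap(unsigned long addr, u64 pfn,
                              struct pagemapread *pm)
    {
            /* Copy a full 64-bit entry to userspace. */
            if (put_user(pfn, pm->out))
                    return -EFAULT;
            pm->out++;
            if (pm->out >= pm->end)
                    return PM_END_OF_BUFFER;
            return 0;
    }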
[akpm@linux-foundation.org: coding-style fixes] Signed-off-by: Thomas Tuttle <ttuttle@google.com> Cc: Matt Mackall <mpm@selenic.com> Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
[NET]: Make /proc/net a symlink on /proc/self/net (v3)
introduced a /proc/self/net directory without bumping the corresponding
link count for /proc/self.
This patch replaces the static link count initializations with a call that
counts the number of directory entries in the given pid_entry table
whenever it is instantiated, and thus relieves the burden of manually
keeping the two in sync.
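The counting helper is roughly of this shape (paraphrased); the link count of a
/proc/<pid> directory then becomes 2 plus the number of sub-directory entries in
its pid_entry table:

    static unsigned int pid_entry_count_dirs(const struct pid_entry *entries,
                                             unsigned int n)
    {
            unsigned int i, count = 0;

            /* Count how many entries in the table are directories. */
            for (i = 0; i < n; ++i)
                    if (S_ISDIR(entries[i].mode))
                            ++count;

            return count;
    }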
[akpm@linux-foundation.org: cleanup] Acked-by: Eric W. Biederman <ebiederm@xmission.com> Cc: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
The d_instantiate hook for Smack can hang on the root inode of a
filesystem if the file system code has not really done all the set-up.
Fuse is known to encounter this problem.
This change detects an attempt to instantiate a root inode and addresses
it early in the processing, before any attempt is made to do something
that might hang.
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com> Tested-by: Luiz Fernando N. Capitulino <lcapitulino@mandriva.com.br> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
When using 4+ GB RAM and SWIOTLB is active, the driver corrupts
memory by writing an skb after the relevant DMA page has been
unmapped. Although this doesn't happen when *not* using bounce
buffers, clearing the pointer to the DMA page after unmapping
it fixes the problem.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Jay Cliburn <jacliburn@bellsouth.net> Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
[jacliburn@bellsouth.net: backport to 2.6.25.4] Signed-off-by: Chris Wright <chrisw@sous-sol.org>
According to this and another similar lockdep report inet_fragment
locks are taken from nf_ct_frag6_gather() with softirqs enabled, but
these locks are mainly used in softirq context, so disabling BHs is
necessary.
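The general pattern, shown with a hypothetical lock name (the real patch touches
the fragment-queue setup path called from nf_ct_frag6_gather()): a lock that is
also taken in softirq context must be taken with BHs disabled when used from
process context, otherwise a softirq on the same CPU can deadlock on it.

    spin_lock_bh(&frag_hash_lock);      /* instead of spin_lock(&frag_hash_lock) */
    /* ... hash lookup / queue insertion ... */
    spin_unlock_bh(&frag_hash_lock);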
Reported-and-tested-by: Eric Sesterhenn <snakebyte@gmx.de> Signed-off-by: Jarek Poplawski <jarkao2@gmail.com> Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
In the xt_connlimit match module, the counter for an IP is decreased when
a TCP packet goes through the chain with ip_conntrack state TW.
It is natural for the server and client to close the socket
with a FIN packet. But when the client/server closes the socket with an RST
packet (using SO_LINGER), the counter for this connection still exists.
The following patch, based on linux-2.6.25.4, fixes this.
Signed-off-by: Alexey Dobriyan <adobriyan@parallels.com> Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Jürgen Mell reported an FPU state corruption bug under CONFIG_PREEMPT,
and bisected it to commit v2.6.19-1363-gacc2076, "i386: add sleazy FPU
optimization".
Add tsk_used_math() checks to prevent calling math_state_restore()
which can sleep in the case of !tsk_used_math(). This prevents
making a blocking call in __switch_to().
Apparently "fpu_counter > 5" check is not enough, as in some signal handling
and fork/exec scenarios, fpu_counter > 5 and !tsk_used_math() is possible.
It's a side effect though. This is the failing scenario:
process 'A' in save_i387_ia32() just after clear_used_math()
Got an interrupt and pre-empted out.
At the next context switch to process 'A', the kernel tries to restore
the math state proactively, sees fpu_counter > 0 and !tsk_used_math(),
and this results in init_fpu() during __switch_to()'s math_state_restore(),
resulting in FPU corruption which will be saved/restored
(save_i387_fxsave and restore_i387_fxsave) during the remaining
part of the signal handling after the context switch.
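The added guard in __switch_to() is, in outline, of this shape (paraphrased,
not the literal hunk):

    /* Only preload the FPU state if the task actually owns any. */
    if (tsk_used_math(next_p) && next_p->fpu_counter > 5)
            math_state_restore();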
We check the hardware freq against the OS cached freq value in get_cur_freq_on_cpu().
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Thomas Renninger <trenn@suse.de> Signed-off-by: Dave Jones <davej@redhat.com> Cc: Thomas Renninger <trenn@suse.de> Cc: "Anthony L. Awtrey" <tony@awtrey.com>
[chrisw: backport to 2.6.25.4] Signed-off-by: Chris Wright <chrisw@sous-sol.org>
[jkosina@suse.cz: Needed to fix apple aluminium keyboard regression]
Since 2.6.25 the HID_QUIRK_APPLE_HAS_FN quirk is enabled even for
non-laptop Apple keyboards of the Aluminium series. The USB versions of
these don't need Numlock emulation, unlike the laptop (and Aluminium
Wireless) ones, as they have a proper keypad.
This patch splits the Numlock emulation for Apple keyboards in a
different quirk flag, so that it can be enabled for all the keyboards
but the Aluminium USB ones.
If the Numlock emulation is enabled for Aluminium USB keyboards, the
JKL and UIO keys become the numeric pad and the rest of the keyboard
is disabled, including the key used to disable Numlock.
Additionally, these keyboards should not have a Numlock at all, as the
Numlock key is instead replaced by the 'Clear' key, as usual for Apple
USB keyboards.
Signed-off-by: Diego 'Flameeyes' Petteno <flameeyes@gmail.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Using iptables 1.3.8 with kernel 2.6.25, rules which include '-m
iprange' don't automatically pull in xt_iprange module. Below patch
adds module aliases to fix that. Patch against latest -git, but seems
like a good candidate for -stable also.
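The fix amounts to adding module aliases along these lines to xt_iprange.c, so
that both the IPv4 and IPv6 revisions of the match auto-load the module:

    MODULE_ALIAS("ipt_iprange");
    MODULE_ALIAS("ip6t_iprange");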
Signed-off-by: Phil Oester <kernel@linuxace.com> Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
This fixes a bug where the I/O buffer is not freed on driver removal.
Signed-off-by: Masakazu Mokuno <mokuno@sm.sony.co.jp> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Reported-by: Laurence Withers <l@lwithers.me.uk> Cc: Gary Hade <garyhade@us.ibm.com> Cc: Greg KH <greg@kroah.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[cebbert@redhat.com: backport, remove first hunk to make port easier]
[chrisw@sous-sol.org: add back first hunk] Signed-off-by: Chris Wright <chrisw@sous-sol.org>
CR4 manipulation is not protected against interrupts and preemption,
but KVM uses smp_function_call to manipulate the X86_CR4_VMXE bit
either from the CPU hotplug code or from the kvm_init call.
We need to protect the CR4 manipulation from both interrupts and
preemption.
Original bug report: http://lkml.org/lkml/2008/5/7/48
Bugzilla entry: http://bugzilla.kernel.org/show_bug.cgi?id=10642
This is not a regression from 2.6.25, it's a long standing and hard to
trigger bug.
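Illustrative pattern only (not the literal patch): a read-modify-write of CR4
must not be interrupted or preempted, so it needs to be bracketed roughly like
this:

    unsigned long flags;

    /*
     * Keep the read-modify-write of CR4 atomic with respect to
     * interrupts and preemption on this CPU.
     */
    local_irq_save(flags);
    write_cr4(read_cr4() | X86_CR4_VMXE);
    local_irq_restore(flags);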
Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
When we have multiple buffers in a single page for a blocksize == pagesize
filesystem we might overwrite the page contents if two callers hit it
shortly after each other. To prevent that we need to keep the page locked
until I/O is completed and the page marked uptodate.
Thanks to Eric Sandeen for triaging this bug and finding a reproducible
testcase and Dave Chinner for additional advice.
This should fix kernel.org bz #10421.
Tested-by: Eric Sandeen <sandeen@sandeen.net>
SGI-PV: 981813
SGI-Modid: xfs-linux-melb:xfs-kern:31173a
Signed-off-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: David Chinner <dgc@sgi.com> Signed-off-by: Lachlan McIlroy <lachlan@sgi.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
tsc_enabled is set to 0 from the command line switch "notsc" and from
the mark_tsc_unstable code. Separate those functionalities and replace
tsc_enabled with tsc_disabled. This also makes the native_sched_clock()
decision about when to use the TSC understandable.
Preparatory patch to solve the sched_clock() issue on 32 bit.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
The current tsc_init() clears the TSC feature bit if the TSC khz
cannot be calculated, causing us to panic in
arch/x86/kernel/cpu/bugs.c check_config(). We should simply mark it
unstable.
Frankly, someone should take an axe to this code. mark_tsc_unstable()
not only marks it unstable, but sets tsc_enabled to 0, which seems
redundant but is actually important here because it means it won't be
used by sched_clock() either. Perhaps a tristate enum "UNUSABLE,
UNSTABLE, OK" would be clearer, and separate mark_tsc_unstable() and
mark_tsc_broken() functions?
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
When the TSC is calibrated against the PIT due to the nonavailability
of PMTIMER/HPET or due to SMI interference then the setup of the per
CPU cyc2ns variables is skipped. This is unlikely to happen but it
would definitely render sched_clock() unusable.
We saw a kernel oops in our regression testing when a multicast "join
finish" occurred just after the interface was brought down -- this is
<https://bugs.openfabrics.org/show_bug.cgi?id=1040>. The test
randomly causes the HCA physical port to go down then up.
The cause of this is that ipoib_mcast_join_finish() processing happens
just after ipoib_mcast_dev_flush() has been invoked (in which case the
broadcast pointer is NULL). This patch tests for and handles the case
where priv->broadcast is NULL.
Cc: <stable@kernel.org> Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
This is a slight change in the namespace cgroup subsystem api.
The change is that previously, when cgroup_clone() was called (currently
only from the unshare path in the ns_proxy cgroup), you'd get a new group
named "node_$pid", whereas now you'll get a group named after just your pid.
The only users who would notice it are those who are using the ns_proxy
cgroup subsystem to auto-create cgroups when namespaces are unshared -
something of an experimental feature, which I think really needs more
complete container/namespace support in order to be useful. I suspect the
only users are Cedric and Serge, or maybe a few others on
containers@lists.linux-foundation.org. And in fact it would only be
noticed by the users who make the assumption about how the name is
generated, rather than getting it from the /proc/<pid>/cgroups file for
the process in question.
Whether the change is actually needed or not I'm fairly agnostic on, but I
guess it is more elegant to just use the pid as the new group name rather
than adding a fairly arbitrary "node_" prefix on the front.
[menage@google.com: provided changelog] Signed-off-by: Cedric Le Goater <clg@fr.ibm.com> Cc: "Paul Menage" <menage@google.com> Cc: "Serge E. Hallyn" <serue@us.ibm.com> Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
This patch reverts 57833ea6b95a3995149f1f6d1a8d8862ab7a0ba2
("usb-serial: pl2303: add support for RATOC REX-USB60F") and adds
support for the device to ftdi_sio driver.
This adds support for the Telit UC864-E HSDPA modem to the option driver.
This lets their customers comply with the GPL instead of having to use a
binary driver from the manufacturer.
Cc: Simon Kissel <kissel@viprinet.com> Cc: Nico Erfurth <ne@nicoerfurth.de> Cc: Andrea Ghezzo <TS-EMEA@telit.com> Cc: Dietmar Staps <Dietmar.Staps@telit.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
When a share was in DFS and the server was Unix/Linux, we were sending paths of the form
\\server\share/dir/file
rather than
//server/share/dir/file
There was some discussion between me and jra over whether we should use
/server/share/dir/file
as MS sometimes says - but the documentation for this claims it should be
doubleslash for this type of UNC-like path format and that works, so leaving
it as doubleslash but converting the \ to / in the //server/share portion.
This gets Samba to now correctly return STATUS_PATH_NOT_COVERED when it is
supposed to (Windows already did, since the direction of the slash was not an
issue for them). Still need another minor change to fully enable DFS (need to
finish some changes to SMBGetDFSRefer).
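A generic illustration of the conversion described above (a hypothetical helper,
not the cifs code itself): flip the backslashes to forward slashes in the
//server/share prefix before building the path.

    static void convert_prefix_delimiters(char *path)
    {
            char *p;

            for (p = path; *p; p++)
                    if (*p == '\\')
                            *p = '/';
    }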
Signed-off-by: Steve French <sfrench@us.ibm.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The user_regset_view table for the 32-bit regsets on the 64-bit build had
the wrong sizes for the FP regsets. This bug had no user-visible effect
(just on kernel modules using the user_regset interfaces and the like).
But the fix is trivial and risk-free.
Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
i2c-core has taken care of the possible corruption of 24RF08 chips for
quite some time now, so device drivers no longer need to do it. And they
really should not, as applying the prevention twice voids it.
I thought that I had fixed all drivers long ago, but apparently I had
missed this one.
Signed-off-by: Jean Delvare <khali@linux-fr.org> Cc: Ben Gardner <bgardner@wabtec.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
There is a strange chip at 0x2e on the second SMBus channel of the
DFI Lanparty NF4 Expert motherboard. Accessing the chip reboots the
system. As there's nothing interesting on this SMBus channel, the
easiest and safest thing to do is to disable it on that board.
This is a better fix to bug #5889 than the it87 driver update that was
done originally:
http://bugzilla.kernel.org/show_bug.cgi?id=5889
Signed-off-by: Jean Delvare <khali@linux-fr.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Trying to online a new memory section that was added via memory hotplug
sometimes results in crashes when the new pages are added via __free_page.
The reason for that is that the pageblock bitmap isn't initialized and hence
contains random stuff. That means that get_pageblock_migratetype()
also returns random stuff, and therefore
__free_one_page() tries to do a list_add to something that isn't even
necessarily a list.
This happens since 86051ca5eaf5e560113ec7673462804c54284456 ("mm: fix
usemap initialization") which makes sure that the pageblock bitmap gets
only initialized for pages present in a zone. Unfortunately for hot-added
memory the zones "grow" after the memmap and the pageblock memmap have
been initialized, which means that the new pages have an uninitialized
bitmap. To solve this the calls to grow_zone_span() and grow_pgdat_span()
are moved to __add_zone() just before the initialization happens.
The patch also moves the two functions since __add_zone() is the only
caller and I didn't want to add a forward declaration.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Andy Whitcroft <apw@shadowen.org> Cc: Dave Hansen <haveblue@us.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Yasunori Goto <y-goto@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
libata: force hardreset if link is in powersave mode
Inhibiting link PM mode doesn't bring the link back online if it's
already in powersave mode. If SRST is used in these cases, libata EH
thinks that the link is offline and fails detection. Force hardreset
if link is in powersave mode.
Signed-off-by: Tejun Heo <htejun@gmail.com> Cc: Jeff Garzik <jeff@garzik.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
This fixes the uninitialized bs when we try to replace an xattr entry in the
ibody with a new value which requires more space than is free.
This situation only happens when we format ext3/4 with an inode size larger than
128 and have put xattr entries both in the ibody and in the block. The
consequence of this bug is that we lose the xattr block pointed to by i_file_acl,
with all the xattr entries in it. We allocate a new xattr block and put that
large value entry in it. The old xattr block becomes an orphan block.
Signed-off-by: Tiger Yang <tiger.yang@oracle.com> Cc: <linux-ext4@vger.kernel.org> Cc: Andreas Gruenbacher <agruen@suse.de> Acked-by: Andreas Dilger <adilger@sun.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
After some time (ca. 5min) or if virtual CD is ejected, device id
changes to 1410:4400:
% lsusb -v -d 1410:4400 | grep InterfaceClass
bInterfaceClass 255 Vendor Specific Class
bInterfaceClass 255 Vendor Specific Class
The variable name says that 0x5010 is a Novatel U727, but searching on the
internet shows that this device also provides a virtual CD that should be
ejected before use. The product id for the serial port in this case is 0x4100.
The patch below is a necessary workaround to support the Zoom Telephonics Model 3095F V.92 USB Mini External modem, which fails to initialise properly during normal probing thus:
May 3 22:53:00 imcfarla kernel: drivers/usb/class/cdc-acm.c: Zero length descriptor references
May 3 22:53:00 imcfarla kernel: cdc_acm: probe of 5-2:1.0 failed with error -22
Adding the patch below causes the probing section to be skipped, and the modem
then initialises correctly.
Signed-off-by: Iain McFarlane <iain@imcfarla.homelinux.net> Acked-by: Oliver Neukum <oneukum@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Enables the SD-Card interface on the GI 0401 HSUPA card from Option.
The unusual_devs.h entry is necessary because the device descriptor is
vendor-specific. That prevents usb-storage from binding to it as an
interface driver.
This revised patch adds a small comment explaining why and reduces the
rev range.
USB: remove PICDEM FS USB demo (04d8:000c) device from ldusb
commit 5fc89390f74ac42165db477793fb30f6a200e79c upstream
Microchip has changed the PICDEM FS USB demo device (0x04d8:000c)
to use bulk transfer and not interrupt transfer. So I've updated the libusb
based program here (Post #31).
http://forum.microchip.com/tm.aspx?m=106426&mpage=2
So I believe that the in-kernel ldusb driver will no longer work with the
demo firmware. It should be removed.
Signed-off-by: Xiaofan Chen <xiaofanc@gmail.com> Cc: Michael Hund <MHund@LD-Didactic.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
This fixes a regression reported by Kamalesh Bulabel where a POWER4
machine would crash because of an SLB miss at a point where the SLB
miss exception was unrecoverable. This regression is tracked at:
http://bugzilla.kernel.org/show_bug.cgi?id=10082
SLB misses at such points shouldn't happen because the kernel stack is
the only memory accessed other than things in the first segment of the
linear mapping (which is mapped at all times by entry 0 of the SLB).
The context switch code ensures that SLB entry 2 covers the kernel
stack, if it is not already covered by entry 0. None of entries 0
to 2 are ever replaced by the SLB miss handler.
Where this went wrong is that the context switch code assumes it
doesn't have to write to SLB entry 2 if the new kernel stack is in the
same segment as the old kernel stack, since entry 2 should already be
correct. However, when we start up a secondary cpu, it calls
slb_initialize, which doesn't set up entry 2. This is correct for
the boot cpu, where we will be using a stack in the kernel BSS at this
point (i.e. init_thread_union), but not necessarily for secondary
cpus, whose initial stack can be allocated anywhere. This doesn't
cause any immediate problem since the SLB miss handler will just
create an SLB entry somewhere else to cover the initial stack.
In fact it's possible for the cpu to go quite a long time without SLB
entry 2 being valid. Eventually, though, the entry created by the SLB
miss handler will get overwritten by some other entry, and if the next
access to the stack is at an unrecoverable point, we get the crash.
This fixes the problem by making slb_initialize create a suitable
entry for the kernel stack, if we are on a secondary cpu and the stack
isn't covered by SLB entry 0. This requires initializing the
get_paca()->kstack field earlier, so I do that in smp_create_idle
where the current field is initialized. This also abstracts a bit of
the computation that mk_esid_data in slb.c does so that it can be used
in slb_initialize.
Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Chris Wright <chrisw@sous-sol.org>
- Don't trust a length which is greater than the working buffer.
An invalid length could cause overflow when calculating buffer size
for decoding oid.
- An oid length of zero is invalid and allows for an off-by-one error when
decoding oid because the first subid actually encodes first 2 subids.
- A primitive encoding may not have an indefinite length.
Thanks to Wei Wang from McAfee for the report.
Cc: Steven French <sfrench@us.ibm.com> Cc: stable@kernel.org Acked-by: Patrick McHardy <kaber@trash.net> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit bd2ab67030e9116f1e4aae1289220255412b37fd "md: close a livelock window
in handle_parity_checks5" introduced a bug in handling 'repair' operations.
After a repair operation completes we clear the state bits tracking this
operation. However, they are cleared too early and this results in the code
deciding to re-run the parity check operation. Since we have done the repair
in memory the second check does not find a mismatch and thus does not do a
writeback.
The input argument to rtc_time_to_tm() is unsigned, as are the members of
the output structure. However, signed arithmetic is used within for
calculations, leading to incorrect results for input values outside the
signed positive range. If this happens, the time of day returned is out of
range. Found the problem when fiddling with the RTC and the driver, where the
year was set to an unexpectedly large value like 2070.
Reported-by: Frank de Jong <frapex@xs4all.nl>
> [1.] One line summary of the problem:
> linux-2.6.25.3, aha152x'->init suspiciously returned 1, it should
> follow 0/-E convention. The module / driver works okay. Unloading the
> module is impossible.
The driver is apparently returning 0 on failure and 1 on success.
That's a bit unfortunate. Fix it by changing these to -ENODEV and 0 respectively.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The problem is that the driver calls aha152x_release() under a
list_for_each_entry(). Unfortunately, aha152x_release() deletes from
the list in question. Fix this by using list_for_each_entry_safe().
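The usual pattern when the loop body may remove the current entry (struct and
field names here are illustrative only, not the driver's):

    struct aha152x_entry *p, *tmp;

    /*
     * The _safe variant keeps a lookahead pointer, so deleting 'p' from
     * the list inside the body does not break the iteration.
     */
    list_for_each_entry_safe(p, tmp, &aha152x_host_list, list)
            aha152x_release(p->shost);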
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If the ping tmo is longer than the recv tmo then we could miss a window
where we were supposed to check the recv tmo. This happens because
the ping code will set the next timeout for the ping timeout, and if the
ping executes quickly there will be a long chunk of time before the
timer wakes up again.
This patch has the ping processing code kick off a recv
tmo check when getting a nop in response to our ping.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The following patch fixes a bug in the iscsi nop processing.
The target sends iscsi nops to ping the initiator and the
initiator has to send nops to reply and can send nops to
ping the target.
In 2.6.25 we moved the nop processing to the kernel to handle
problems when the userspace daemon is not up, but the target
is pinging us, and to handle when scsi commands timeout, but
the transport may be the cause (we can send a nop to check
the transport). When we added this code we added a bug where,
if the transport timer wakes at the exact same time we are supposed to check
for a nop timeout, we drop the session instead of checking the transport.
This patch checks whether an iscsi ping is outstanding and whether the ping
has timed out, to determine if we need to signal a connection problem.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The qla1280 driver was ANDing the output value of mailbox register
0 with (1 << target-number) to determine whether to enable queueing
on the target in question.
But mailbox register 0 has the status code for the mailbox command
(in this case, Set Target Parameters). Potential values are:
/*
* ISP mailbox command complete status codes
*/
So clearly that is in error. I can't think what the author of that
line was looking for in a mailbox register, so I just eliminated the
AND. 'flag' is used later in the function, and I think that the later
usage was also wrong, though it was used to set values that aren't
used. Oh well, an overhaul of this driver is not what I want to do
now -- just a bugfix.
After the fix, I found that my disks were getting a queue depth of
255, which is far too many. Most SCSI disks are limited to 32 or
64. In any case, there's no point queueing up a bunch of commands
to the adapter that will just result in queue full conditions or starve other
targets from being issued commands due to running out of internal
memory. So I dropped the default queue depth to 32 (from which 1 is
subtracted elsewhere, giving a net of 31).
I tested with a Seagate ST336753LC, and results look good, so
I'm satisfied with this patch.
Signed-off-by: Jeremy Higdon <jeremy@sgi.com> Acked-by: Jes Sorensen <jes@sgi.com> Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The following patch fixes a [probable] copy & paste mistake in
airprime.c. Instead of unlocking an acquired mutex, the actual
code tries to lock it again.
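In outline, the corrected pairing looks like this (the lock name is hypothetical,
for illustration only):

    mutex_lock(&port->mutex);
    /* ... tear down URBs and free buffers ... */
    mutex_unlock(&port->mutex);     /* the buggy code called mutex_lock() here again */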
So, forever, we've had this ptrace_signal_deliver implementation
which tries to handle all of the nasties that can occur when the
debugger looks at a process about to take a signal. It's meant
to address all of these issues inside of the kernel so that the
debugger need not be mindful of such things.
Problem is, this doesn't work.
The idea was that we should do the syscall restart business first, so
that the debugger captures that state. Otherwise, if the debugger for
example saves the child's state, makes the child execute something
else, then restores the saved state, we won't handle the syscall
restart properly because we lose the "we're in a syscall" state.
The code here worked for most cases, but if the debugger actually
passes the signal through to the child unaltered, it's possible that
we would do a syscall restart when we shouldn't have.
In particular this breaks the case of debugging a process under a gdb
which is being debugged by yet another gdb. gdb uses sigsuspend
to wait for SIGCHLD of the inferior, but if gdb itself is being
debugged by a top-level gdb we get a ptrace_stop(). The top-level gdb
does a PTRACE_CONT with SIGCHLD to let the inferior gdb see the
signal. But ptrace_signal_deliver() assumed the debugger would cancel
out the signal and therefore did a syscall restart, because the return
error was ERESTARTNOHAND.
Fix this by simply making ptrace_signal_deliver() a nop, and providing
a way for the debugger to control system call restarting properly:
1) Report a "in syscall" software bit in regs->{tstate,psr}.
It is set early on in trap entry to a system call and is fully
visible to the debugger via ptrace() and regsets.
2) Test this bit right before doing a syscall restart. We have
to do a final recheck right after get_signal_to_deliver() in
case the debugger cleared the bit during ptrace_stop().
3) Clear the bit in trap return so we don't accidentally try to set
that bit in the real register.
As a result we also get a ptrace_{is,clear}_syscall() for sparc32 just
like sparc64 has.
M68K has this same exact bug, and is now the only other user of the
ptrace_signal_deliver hook. It needs to be fixed in the same exact
way as sparc.
Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
We had a report that running sensors-detect on a Sapphire AM2RD790
motherboard killed the CPU. While the exact cause is still unknown,
I'd rather play it safe and prevent any access to the SMBus on that
machine by not letting the i2c-piix4 driver attach to the SMBus host
device on that machine. Also blacklist a similar board made by DFI.
Signed-off-by: Jean Delvare <khali@linux-fr.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
System topology on Intel-based systems needs to be exported
for the non-NUMA case as well.
All parts of asm-i386/topology.h have come under
#ifdef CONFIG_NUMA after the merge into asm-x86/topology.h
/sys/devices/system/cpu/cpu?/topology/* is populated based on
ENABLE_TOPO_DEFINES
The sysfs cpu topology is not being populated on my dual socket
dual core xeon 5160 processor based (x86 32 bit) system.
CONFIG_NUMA is not set in my case yet the topology is relevant
and useful.
irqbalance daemon application depends on topology to build the
cpus and package list and it fails on Fedora9 beta since the
sysfs topology was not being populated in the 2.6.25 kernel.
I am not sure if it was intentional to not define ENABLE_TOPO_DEFINES
for non-numa systems.
This fix has been tested on the above mentioned dual core, dual socket
system.
On certain configurations (certain MacBooks), even though all the
conditions for SIDPR access described in the datasheet are met,
actually reading those registers just returns 0 and writes have no
effect. Verify SIDPR is actually working before enabling it.
This is reported by Ryan Roth in bz#10512.
Signed-off-by: Tejun Heo <htejun@gmail.com> Cc: Ryan Roth <ryan.roth@ch2m.com> Signed-off-by: Jeff Garzik <jgarzik@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
While reinjecting *bigger* modified versions of IPv6 packets using
libnetfilter_queue, things work fine on a 2.6.24 kernel (2.6.22 too)
but I get the following on recent kernels (2.6.25, trace below is
against today's net-2.6 git tree):
Looking at the code, I ended up in nfq_mangle() function (called by
nfqnl_recv_verdict()) which performs a call to skb_copy_expand() due to
the increased size of data passed to the function. AFAICT, it should ask
for 'diff' instead of 'diff - skb_tailroom(e->skb)'. Because the
resulting sk_buff has not enough space to support the skb_put(skb, diff)
call a few lines later, this results in the call to skb_over_panic().
The patch below asks for allocation of a copy with enough space for the
mangled packet and the same amount of headroom as the old sk_buff. While
looking at how the regression appeared (e2b58a67), I noticed the same
pattern in ipq_mangle_ipv6() and ipq_mangle_ipv4(). The patch corrects
those locations too.
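In outline, the corrected allocation keeps the old headroom and asks for the
full extra tailroom 'diff' (a sketch of the idea, not the literal hunk; the
error handling shown is an assumption):

    nskb = skb_copy_expand(e->skb, skb_headroom(e->skb),
                           diff, GFP_ATOMIC);   /* was: diff - skb_tailroom(e->skb) */
    if (!nskb)
            return -ENOMEM;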
Tested with bigger reinjected IPv6 packets (nfqnl_mangle() path), things
are ok (2.6.25 and today's net-2.6 git tree).
Signed-off-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0794935e "[NETFILTER]: nf_conntrack: optimize hash_conntrack()"
results in ARM platforms hashing uninitialised padding. This padding
doesn't exist on other architectures.
Fix this by replacing NF_CT_TUPLE_U_BLANK() with memset() to ensure
everything is initialised. There were only 4 bytes that
NF_CT_TUPLE_U_BLANK() wasn't clearing anyway (or 12 bytes on ARM).
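Illustrative of the change (the real hunk sits in the conntrack tuple set-up
code; 'iph' here is an assumed IPv4 header pointer): zero the whole tuple up
front so padding bytes never reach the hash function.

    struct nf_conntrack_tuple tuple;

    memset(&tuple, 0, sizeof(tuple));   /* clears padding too, unlike NF_CT_TUPLE_U_BLANK() */
    tuple.src.u3.ip = iph->saddr;
    tuple.dst.u3.ip = iph->daddr;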
Signed-off-by: Philip Craig <philipc@snapgear.com> Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In 2.6.23, if you unpacked a kernel source tarball and then
ran "make menuconfig" you'd be presented with this message:
# using defaults found in arch/i386/defconfig
and the default options would be set.
The same thing in 2.6.24 does not give you any "using defaults" message, and
the default config options within menuconfig are rather blank (e.g. no PCI
support). You can work around this by explicitly running "make defconfig"
before menuconfig, but it would be nice to have the behaviour the way it was
for 2.6.23 (and the way it still is for other archs).
Fixed by adding an x86-specific defconfig list to Kconfig.
Fixes: http://bugzilla.kernel.org/show_bug.cgi?id=10470 Tested-by: Daniel Drake <dsd@gentoo.org> Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The tx packet counting and the local loopback of CAN frames should
only happen in the case that the CAN frame has been enqueued to the
netdevice tx queue successfully.
Thanks to Andre Naujoks <nautsch@gmail.com> for reporting this issue.
Signed-off-by: Oliver Hartkopp <oliver@hartkopp.net> Signed-off-by: Urs Thuermann <urs@isnogud.escape.de> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
dccp_feat_change() validates the length and on error returns 1.
This happens to work since the call chain checks for 0 == success,
but this value is returned to userspace, so make it a real error value.
Signed-off-by: Chris Wright <chrisw@sous-sol.org> Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Fixes bug http://bugzilla.kernel.org/show_bug.cgi?id=10556
where conn templates with protocol=IPPROTO_IP can oops the backup box.
The result from ip_vs_proto_get() should be checked because the
protocol value can be invalid or unsupported on the backup. But
for valid messages we should not fail for templates which use
IPPROTO_IP. Also, add checks to validate message limits and
connection state. Show state NONE for templates using IPPROTO_IP.
Fix tested and confirmed by L0op8ack <l0op8ack@hotmail.com>
Signed-off-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
As noticed by Ben Greear, macvlan crashes the kernel when unloading the
module. The reason is that it tries to clean up the macvlan_port pointer
on the macvlan device itself instead of the underlying device. A non-NULL
pointer is taken as indication that the macvlan_handle_frame_hook is
valid, when receiving the next packet on the underlying device it tries
to call the NULL hook and crashes.
Clean up the macvlan_port on the correct device to fix this.
Signed-off-by: Patrick McHardy <kaber@trash.net> Tested-by: Ben Greear <greearb@candelatech.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
A class is not removed from the event queue while changing
from parent to leaf, which can cause corruption of this rb tree. This
patch fixes a bug introduced by my patch "sch_htb: turn intermediate
classes into leaves", commit 160d5e10f87b1dc88fd9b84b31b1718e0fd76398.
Many thanks to Jan 'yanek' Bortl for finding a way to reproduce this
rare bug and narrowing the test case, which made proper diagnosis
possible.
This patch is recommended for all kernels starting from 2.6.20.
Reported-and-tested-by: Jan 'yanek' Bortl <yanek@ya.bofh.cz> Signed-off-by: Jarek Poplawski <jarkao2@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
As reported by Jos van der Ende, ever since commit 5a606b72a4309a656cd1a19ad137dc5557c4b8ea ("[SPARC64]: Do not ACK an
INO if it is disabled or inprogress.") sun4u interrupts
can get stuck.
What this changeset did was add the following conditional to
the various IRQ chip ->enable() handlers on sparc64:
if (unlikely(desc->status & (IRQ_DISABLED|IRQ_INPROGRESS)))
return;
which is correct, however it means that special care is needed
in the ->enable() method.
Specifically we must put the interrupt into IDLE state during
an enable, or else it might never be sent out again.
Setting the INO interrupt state to IDLE resets the state machine,
the interrupt input to the INO is retested by the hardware, and
if an interrupt is being signalled by the device, the INO
moves back into TRANSMIT state, and an interrupt vector is sent
to the cpu.
The two sun4v IRQ chip handlers were already doing this properly,
only sun4u got it wrong.
Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
We need to be more liberal about the alignment of the buffer given to
us by sigaltstack(). The user should not need to be mindful of all of
the alignment constraints we have for the stack frame.
This mirrors how we handle this situation in clone() as well.
Also, we align the stack even in non-SA_ONSTACK cases so that signals
due to bad stack alignment can be delivered properly. This makes such
errors easier to debug and recover from.
Finally, add the sanity check x86 has to make sure we won't overflow
the signal stack.
This fixes glibc testcases nptl/tst-cancel20.c and
nptl/tst-cancelx20.c
Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
I have a sparcstation 20 clone with a lot of on board serial ports.
The serial core code assumes that uarts are assigned contiguously
and that may not be the case when there are multiple zs devices
present. This patch ensures that uart chips are placed in front of
keyboard/mouse chips in the port table.
ffd37420: ttyS0 at MMIO 0xf1100000 (irq = 44) is a zs (ESCC)
Console: ttyS0 (SunZilog zs0)
console [ttyS0] enabled
ffd37420: ttyS1 at MMIO 0xf1100004 (irq = 44) is a zs (ESCC)
ffd37500: Keyboard at MMIO 0xf1000000 (irq = 44) is a zs
ffd37500: Mouse at MMIO 0xf1000004 (irq = 44) is a zs
ffd3c5c0: ttyS2 at MMIO 0xf1100008 (irq = 44) is a zs (ESCC)
ffd3c5c0: ttyS3 at MMIO 0xf110000c (irq = 44) is a zs (ESCC)
ffd3c6a0: ttyS4 at MMIO 0xf1100010 (irq = 44) is a zs (ESCC)
ffd3c6a0: ttyS5 at MMIO 0xf1100014 (irq = 44) is a zs (ESCC)
ffd3c780: ttyS6 at MMIO 0xf1100018 (irq = 44) is a zs (ESCC)
ffd3c780: ttyS7 at MMIO 0xf110001c (irq = 44) is a zs (ESCC)
Signed-off-by: Robert Reif <reif@earthlink.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Flowlabel text format was not correct and thus ambiguous.
For example, 0x00123 or 0x01203 are formatted as 0x123.
This is not what audit tools want.
Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Drivers in the ohci-hcd family should perform certain tasks whenever
their controller device is resumed. These include checking for loss
of power during suspend, turning on port power, and enabling interrupt
requests.
Until now these jobs have been carried out when the root hub is
resumed, not when the controller is. Many drivers work around the
resulting awkwardness by automatically resuming their root hub
whenever the controller is resumed. But this is wasteful and
unnecessary.
In 2.6.25, ohci-pci doesn't even do that. After waking up from
hibernation, it simply leaves the controller in a RESET state, which
is useless.
To simplify the situation, this patch (as1066b) adds a new core
routine, ohci_finish_controller_resume(), which can be used by all the
OHCI-variant drivers. They can call the new routine instead of
resuming their root hubs. And ohci-pci.c can call it instead of using
its own special-purpose handler.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>