Authenc works in two stages for encryption: it first encrypts and
then computes an ICV. The context memory of the request is used
by both operations. The problem is that when an asynchronous
encryption completes, we will compute the ICV and then reread the
context memory of the encryption to get the original request.
It just happens that we have a buffer of 16 bytes in front of the
request pointer, so ICVs of 16 bytes (such as SHA1) do not trigger
the bug. However, any attempt to use a larger ICV instantly kills
the machine when the first asynchronous encryption is completed.
This patch fixes this by saving the request pointer before we start
the ICV computation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Fix the checksum feature advertised in device flags. The hardware supports
TCP/UDP over IPv4 and TCP/UDP over IPv6 (without IPv6 extension headers).
However, the kernel feature flags do not distinguish IPv6 with/without
extension headers.
Therefore, the driver needs to use NETIF_F_IP_CSUM instead of
NETIF_F_HW_CSUM since the latter includes all IPv6 packets.
A future patch can be created to check for extension headers and perform
software checksum calculation.
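For illustration, a minimal sketch of the resulting feature setup in the
driver's probe path (the NETIF_F_SG companion flag here is illustrative,
not taken from the driver):

    /* advertise IPv4-only checksum offload; NETIF_F_HW_CSUM would
     * wrongly claim support for all IPv6 packets as well */
    dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;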
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
I have gotten to the root cause of the hugetlb badness I reported back on
August 15th. My system has the following memory topology (note the
overlapping node):
setup_zone_migrate_reserve() scans the address range 0x0-0x8000000 looking
for a pageblock to move onto the MIGRATE_RESERVE list. Finding no
candidates, it happily continues the scan into 0x8000000-0x44000000. When
a pageblock is found, the pages are moved to the MIGRATE_RESERVE list on
the wrong zone. Oops.
setup_zone_migrate_reserve() should skip pageblocks in overlapping nodes.
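A hedged sketch of the kind of check this implies inside the
setup_zone_migrate_reserve() scan loop (page_to_nid() and zone_to_nid()
are existing helpers; the surrounding loop is elided):

    page = pfn_to_page(pfn);
    /* skip pageblocks that belong to an overlapping node */
    if (page_to_nid(page) != zone_to_nid(zone))
        continue;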
Signed-off-by: Adam Litke <agl@us.ibm.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Nishanth Aravamudan <nacc@us.ibm.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Thanks to Johann Dahm and David Richter for the bug report and testing.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Cc: David Richter <richterd@citi.umich.edu>
Tested-by: Johann Dahm <jdahm@umich.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Vegard Nossum reported
----------------------
> I noticed that something weird is going on with /proc/sys/sunrpc/transports.
> This file is generated in net/sunrpc/sysctl.c, function proc_do_xprt(). When
> I "cat" this file, I get the expected output:
> $ cat /proc/sys/sunrpc/transports
> tcp 1048576
> udp 32768
> But I think that it does not check the length of the buffer supplied by
> userspace to read(). With my original program, I found that the stack was
> being overwritten by the characters above, even when the length given to
> read() was just 1.
David Wagner added (among other things) that copy_to_user could
probably be used here.
Ingo Oeser suggested to use simple_read_from_buffer() here.
The conclusion is that proc_do_xprt() indeed doesn't check the size of
the userspace buffer, so fix this by using Ingo's suggestion.
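A minimal sketch of the fixed handler, assuming the sysctl proc_handler
signature of that era and the existing svc_print_xprts() helper:

    static int proc_do_xprt(struct ctl_table *table, int write,
                struct file *file, void __user *buffer,
                size_t *lenp, loff_t *ppos)
    {
        char tmpbuf[256];
        size_t len;

        if ((*ppos && !write) || !*lenp) {
            *lenp = 0;
            return 0;
        }
        len = svc_print_xprts(tmpbuf, sizeof(tmpbuf));
        /* copies at most *lenp bytes and honours *ppos */
        *lenp = simple_read_from_buffer(buffer, *lenp, ppos,
                        tmpbuf, len);
        return 0;
    }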
Reported-by: Vegard Nossum <vegard.nossum@gmail.com>
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Ingo Oeser <ioe-lkml@rameria.de>
Cc: Neil Brown <neilb@suse.de>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Greg Banks <gnb@sgi.com>
Cc: Tom Tucker <tom@opengridcomputing.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
On Wed, Jul 23, 2008 at 03:52:36PM +0300, Andrei Popa wrote:
> I installed gnokii-0.6.22-r2 and gave the command "gnokii --identify"
> and the kernel oopsed:
>
> BUG: unable to handle kernel NULL pointer dereference at 00000458
> IP: [<c0444b52>] mutex_unlock+0x0/0xb
> [<c03830ae>] acm_tty_open+0x4c/0x214
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Tested-by: Andrei Popa <andrei.popa@i-neo.ro>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Joshua Hoblitt reported that only 3 GB of his 16 GB of RAM is
usable. Booting with mtrr_show showed us the BIOS-initialized
MTRR settings - which are all wrong.
So the root cause is that the BIOS has not set the mask correctly:
As there's no point in adding a fixed fudge value (originally 5
seconds), honor the user settings only. We also remove the driver's
dead get_rport_dev_loss_tmo callback
(qla2x00_get_rport_loss_tmo()).
Signed-off-by: Andrew Vasquez <andrew.vasquez@qlogic.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
yesterday I tried to reactivate my old 486 box and wanted to install a
current Linux with latest kernel on it. But it turned out that the
latest kernel does not boot because the machine crashes early in the
setup code.
After some debugging it turned out that the problem is the query_ist()
function. If the BIOS interrupt this function uses is called, the
machine simply locks up. It looks like a BIOS bug. Looking for a
workaround for this problem I wrote the attached patch. It checks for
the CPUID instruction and, if it is not implemented, does not call the
speedstep BIOS function. As far as I know, SpeedStep should be
available since some of the earliest Pentiums.
Alan Cox observed that it's available since the Pentium II, so cpuid
levels 4 and 5 can be excluded altogether.
H. Peter Anvin cleaned up the code some more:
> Right in concept, but I dislike the implementation (duplication of the
> CPU detect code we already have). Could you try this patch and see if
> it works for you?
which, with a small modification to fix a build error, produces a
kernel that boots on my machine.
Fix a NULL pointer dereference that happened when calling
i2c_new_probed_device on one of the addresses for which we use byte
reads instead of a quick write for detection purposes (that is: 0x30-0x37
and 0x50-0x5f).
Signed-off-by: Hans Verkuil <hverkuil@xs4all.nl>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Fix a range check in netfilter IP NAT for SNMP to always use a big enough size
variable that the compiler won't moan about comparing it to ULONG_MAX/8 on a
64-bit platform.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Eugene Teo <eteo@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The magic write to register 0x82 will often cause PCI config space on
my 8168 (PCI ID 10ec:8168, revision 2, mounted in an LG P300 laptop)
to be filled with ones during driver load, breaking NIC operation
until reboot. If it does not happen on the first driver load it
can easily be reproduced by unloading and loading the driver a few
times.
The magic write was added long ago by this commit:
Author: François Romieu <romieu@fr.zoreil.com>
Date: Sat Jan 10 06:00:46 2004 -0500
[netdrvr r8169] Merge of changes done by Realtek to rtl8169_init_one():
- phy capability settings allows lower or equal capability as suggested
in Realtek's changes;
- I/O voodoo;
- no need to s/mdio_write/RTL8169_WRITE_GMII_REG/;
- s/rtl8169_hw_PHY_config/rtl8169_hw_phy_config/;
- rtl8169_hw_phy_config(): ad-hoc struct "phy_magic" to limit duplication
of code (yep, the u16 -> int conversions should work as expected);
- variable renames and whitespace changes ignored.
As the 8168 wasn't supported by that version, this patch simply
restricts the bogus write to mac versions <= RTL_GIGA_MAC_VER_06.
[The change above makes sense for the 8101/8102 too -- Ueimor]
I have a new PCI-E radeon RV380 series card (PCI device ID 5b64) that
hangs in my sparc64 boxes when the init scripts set the font. The problem
goes away if I disable acceleration.
I haven't figured out that bug yet, but along the way I found some
corrections to make based upon some auditing.
1) The RB2D_DC_FLUSH_ALL value used by the kernel fb driver
and the XORG video driver differ. I've made the kernel
match what XORG is using.
2) In radeonfb_engine_reset() we have top-level code structure
that roughly looks like:
if (family is 300, 350, or V350)
do this;
else
do that;
...
if (family is NOT 300, OR
family is NOT 350, OR
family is NOT V350)
do another thing;
this last conditional makes no sense, is always true,
and obviously was likely meant to be "family is NOT
300, 350, or V350". So I've made the code match the
intent.
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
My copying of linux/init.h didn't go far enough. The definition of
__used singled out gcc minor version 3, but didn't care what the major
version was. This broke when unit-at-a-time was added and gcc started
throwing out initcalls.
This results in an early boot crash when ptrace tries to initialize a
process with an empty, uninitialized register set.
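A hypothetical sketch of the corrected version gate, mirroring the
logic described (not the verbatim header):

    /* __used must key off major AND minor: gcc 3.3+ and all of
     * gcc 4.x understand __attribute__((__used__)) */
    #if __GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 3)
    # define __used __attribute__((__used__))
    #else
    # define __used __attribute__((__unused__))
    #endif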
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Benny Halevy is seeing extern inlines not resolved with gcc 4.3 with
no-unit-at-a-time.
This patch reintroduces unit-at-a-time for gcc >= 4.0, bringing back the
possibility of Uli's crash. If that happens, we'll debug it.
I started seeing both the internal compiler errors and unresolved
inlines on Fedora 9. This patch fixes both problems, without so far
reintroducing the crash reported by Uli.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: Benny Halevy <bhalevy@panasas.com>
Cc: Adrian Bunk <bunk@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Ulrich Drepper <drepper@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Fedora broke PTRACE_SYSEMU again, and UML crashes as a result when it
doesn't need to. This patch makes the PTRACE_SYSEMU check fail gracefully
and makes UML fall back to PTRACE_SYSCALL.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
x86_64 defines either memcpy or __memcpy depending on the gcc version, and
it looks like UML needs to follow that in its exporting.
Cc: Gabriel C <nix.or.die@googlemail.com>
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch makes os_get_task_size locate the bottom of the address space,
as well as the top. This is for systems which put a lower limit on mmap
addresses. It works by manually scanning pages from zero onwards until a
valid page is found.
Because the bottom of the address space may not be zero, it's not
sufficient to assume the top of the address space is the size of the
address space. The size is the difference between the top address and
bottom address.
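A hedged sketch (not the actual UML code) of probing for the lowest
mappable address with one-page MAP_FIXED attempts:

    #include <sys/mman.h>
    #include <unistd.h>

    static unsigned long find_address_space_bottom(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        unsigned long addr;
        void *p;

        for (addr = 0; ; addr += page) {
            p = mmap((void *) addr, page, PROT_READ,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
                 -1, 0);
            if (p != MAP_FAILED) {
                munmap(p, page); /* found a valid page */
                return addr;
            }
        }
    }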
[jdike@addtoit.com: changed the name to reflect that this function is
supposed to return the top of the process address space, not its size and
changed the return value to reflect that. Also some minor formatting
changes]
Signed-off-by: Tom Spink <tspink@gmail.com>
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Protect against the host's time going backwards (e.g. NTP activity on
the host) by keeping track of the time at the last tick; if that is
greater than the current time, keep time stopped until the host
catches up.
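A minimal sketch of the clamp, under the assumption that the tick
handler samples host time as a plain nanosecond count:

    static unsigned long long last_tick_ns;

    static unsigned long long monotonic_host_time(unsigned long long now)
    {
        if (now > last_tick_ns)
            last_tick_ns = now;  /* host moved forward, follow */
        return last_tick_ns;     /* else time stays stopped */
    }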
Cc: Nix <nix@esperi.org.uk>
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Alarm delivery could be noticeably late in the !CONFIG_NOHZ case because lost
ticks weren't being taken into account. This is now treated more carefully,
with the time between ticks being calculated and the appropriate number of
ticks delivered to the timekeeping system.
Cc: Nix <nix@esperi.org.uk>
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The top of physical memory should be below the initial process stack, not the
top of the address space, at least for as long as the stack isn't known to the
kernel VM system and appropriately reserved.
Cc: "Christopher S. Aker" <caker@theshore.net> Signed-off-by: Jeff Dike <jdike@linux.intel.com> Cc: WANG Cong <xiyou.wangcong@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
UML's supposed nanosecond clock interacts badly with NTP when NTP
decides that the clock has drifted ahead and needs to be slowed down.
Slowing down the clock is done by decrementing the cycle-to-nanosecond
multiplier, which is 1. Decrementing that gives you 0 and time is
stopped.
This is fixed by switching to a microsecond clock, with a multiplier
of 1000.
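A worked illustration of why the multiplier matters, using the
standard clocksource conversion:

    /* clocksource arithmetic: ns = (cycles * mult) >> shift */
    static inline unsigned long long cyc2ns(unsigned long long cycles,
                        unsigned int mult,
                        unsigned int shift)
    {
        return (cycles * mult) >> shift;
    }

    /* ns clock: mult = 1;    an NTP slew decrements it to 0 -> time stops
     * us clock: mult = 1000; a decrement to 999 is a gentle 0.1% slew */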
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Reintroduce uml_kmalloc for the benefit of UML libc code. The
previous tactic of declaring __kmalloc so it could be called directly
from the libc side of the house turned out to be getting too intimate
with slab, and it doesn't work with slob.
So, the uml_kmalloc wrapper is back. It calls kmalloc or whatever
that translates into, and libc code calls it.
kfree is left alone since that still works, leaving a somewhat
inconsistent API.
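A sketch of the wrapper as described; the body is essentially a
one-line pass-through on the kernel side of the house:

    void *uml_kmalloc(int size, int flags)
    {
        return kmalloc(size, flags);
    }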
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Correct sparc64's implementation of FUTEX_OP_ANDN to do a
bitwise negate of the oparg parameter before applying the
AND operation. All other archs that support FUTEX_OP_ANDN
either negate oparg explicitly (frv, ia64, mips, sh, x86),
or do so indirectly by using an and-not instruction (powerpc).
Since sparc64 has and-not, I chose to use that solution.
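In plain C, the required FUTEX_OP_ANDN semantics are:

    oldval = *uaddr;
    newval = oldval & ~oparg;  /* and-not: negate, then AND */

sparc64's andn instruction performs both steps at once, which is why
no explicit negation is needed there.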
I've not found any use of FUTEX_OP_ANDN in glibc so the
impact of this bug is probably minor. But other user-space
components may try to use it so it should still get fixed.
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
SCTP uses ip6_xmit() to send fragments after receiving an ICMP
packet-too-big message. But when the packet was sent with ip6_xmit(),
skb->local_df was not initialized, so when the skb enters
ip6_fragment(), the following code will discard it.
SCTP does the following steps:
1. send packet: ip6_xmit(skb, ipfragok=0)
2. receive ICMP packet-too-big message
3. if PMTUD_ENABLE: ip6_xmit(skb, ipfragok=1)
This patch fixes the problem by setting local_df if ipfragok is true.
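A sketch of the change in ip6_xmit(), per the description above:

    if (ipfragok)
        skb->local_df = 1;  /* allow local fragmentation */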
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The rationale is:
* use u32 consistently
* no need to do LCG on values from (better) get_random_bytes
* use more data from get_random_bytes for secondary seeding
* don't reduce state space on srandom32()
* enforce state variable initialization restrictions
Note: the second paper has a version of random32() with even longer period
and a version of random64() if needed.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In the old acer_acpi, I discovered that on some of the newer AMW0
laptops that support the WMID methods, those methods don't work
properly for setting the wireless and bluetooth values.
So for the AMW0 V2 laptops, we want to use both the 'old' AMW0 and the
'new' WMID methods for setting wireless & bluetooth to guarantee we always
enable it.
This was fixed in acer_acpi some time ago, but I forgot to port the patch
over to acer-wmi when it was merged.
(Without this patch, early AMW0 V2 laptops such as the Aspire 5040
won't work with acer-wmi, whereas they did with the old acer_acpi.)
Aesthetic regards aside, commit e8e7b9eb11c34ee18bde8b7011af41938d1ad667
still leaves a bug in the error message, because it uses the unconverted
big-endian value for printk.
Fix this by using a local variable in machine byte order. The result is
correct, more readable, and also produces slightly shorter code on i386.
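A sketch of the pattern (identifier names are illustrative, not the
driver's):

    u32 lba = be32_to_cpu(raw_lba);  /* convert once, up front */

    printk(KERN_ERR "%s: bad LBA %u\n", drive->name, lba);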
Signed-off-by: Petr Tesarik <ptesarik@suse.cz>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@kernel.org>
Acked-by: Borislav Petkov <petkovbb@gmail.com>
[bart: __u32 -> u32]
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
There is a slight chance for a deadlock in the estimator code. We can't call
del_timer_sync() while holding our lock, as the timer might be active and
spinning for the lock on another cpu. Work around this issue by using
try_to_del_timer_sync() and releasing the lock. We could actually delete the
timer outside of our lock, as the add and kill functions are only ever
called from userspace via [gs]etsockopt() and are serialized by a
mutex, but better to make this explicit.
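A hedged sketch of the retry loop (lock and field names assumed):

    while (!try_to_del_timer_sync(&est->timer)) {
        write_unlock_bh(&est_lock);
        cpu_relax();  /* let a running timer take the lock */
        write_lock_bh(&est_lock);
    }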
Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Acked-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The machine will crash if the i2c_attach_client() or maven_init_client()
calls fail, although nobody has yet reported this happening.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Acked-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Cc: Petr Vandrovec <VANDROVE@vc.cvut.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
/* Preamble reconstructed to make the reproducer self-contained; the
 * original snippet began mid-function, so the includes, the open()
 * target and the ftruncate() are assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	char *addr, *addr1 = NULL;
	long page_size = sysconf(_SC_PAGESIZE);
	int fd, i = 0, ret;

	fd = open("mlock_testfile", O_RDWR | O_CREAT, 0644);
	if (fd == -1)
	{
		printf("cant open file\n");
		exit(1);
	}
	ftruncate(fd, page_size);

	addr = 0;
	while (1)
	{
		/* map a page into memory each time */
		if ((addr = (char *) mmap(addr, page_size, PROT_READ |
		    PROT_WRITE, MAP_SHARED, fd, 0)) == MAP_FAILED)
		{
			printf("cant do mmap on file\n");
			exit(1);
		}
		if (0 == i)
			addr1 = addr;
		i++;
		errno = 0;
		/* lock the mapped memory pagewise */
		if ((ret = mlock((char *)addr, 1500)) == -1)
		{
			printf("errno value is %d\n", errno);
			printf("cant lock mapped region\n");
			exit(1);
		}
		addr = addr + page_size;
	}
}
======================================================
This testcase results in an mlock() failure with errno 14, that is,
EFAULT, but it has nowhere been specified that mlock() should return
EFAULT. When I tested the same on older kernels like 2.6.18, I got the
correct result, i.e. errno 12 (ENOMEM).
I think that in the mlock(2) source, setting errno to ENOMEM has been
missed in do_mlock() on mlock_fixup() failure.
SUSv3 requires the following behavior from mlock(2):
[ENOMEM]
Some or all of the address range specified by the addr and
len arguments does not correspond to valid mapped pages
in the address space of the process.
[EAGAIN]
Some or all of the memory identified by the operation could not
be locked when the call was made.
This rule isn't so nice and is slightly strange, but many people think
POSIX/SUS compliance is important.
The bug was reported and analysed by Mark McLoughlin <markmc@redhat.com>;
the patch is based on his and Roland's suggestions.
posix_timer_event() always rewrites the pre-allocated siginfo before sending
the signal. Most of the written info is the same all the time, but memset(0)
is very wrong. If ->sigq is queued we can race with collect_signal() which
can fail to find this siginfo looking at .si_signo, or copy_siginfo() can
copy the wrong .si_code/si_tid/etc.
In short, sys_timer_settime() can in fact stop the active timer, or the user
can receive the siginfo with the wrong .si_xxx values.
Move "memset(->info, 0)" from posix_timer_event() to alloc_posix_timer(),
change send_sigqueue() to set .si_overrun = 0 when ->sigq is not queued.
It would be nice to move the whole sigq->info initialization from send to
create path, but this is not easy to do without uglifying timer_create()
further.
As Roland rightly pointed out, we need more cleanups/fixes here, see the
"FIXME" comment in the patch. Hopefully this patch makes sense anyway, and
it can mask the most bad implications.
Reported-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Mark McLoughlin <markmc@redhat.com>
Cc: Oliver Pinter <oliver.pntr@gmail.com>
Cc: Roland McGrath <roland@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Some chips appear to have the 2D engine hang during screen redraw,
typically in a sequence of copyarea operations. This appears to be
solved by adding a flush of the engine destination pixel cache
and waiting for the engine to be idle before issuing the accel
operation. The performance impact seems to be fairly small.
Here is a trace on an RV370 (PCI device ID 0x5b64), it records the
RBBM_STATUS register, then the source x/y, destination x/y, and
width/height used for the copy:
When things are going fine the copies complete before the next ROP is
even issued, but all of a sudden the 2D unit becomes active (bit 17 in
RBBM_STATUS) and the FIFO retry (bit 13) and FIFO pipeline busy (bit
14) are set as well. The FIFO begins to backup until it becomes full.
What happens next is the radeon_fifo_wait() times out, and we access
the chip illegally leading to a bus error which usually wedges the
box. None of this makes it to the console screen, of course :-)
radeon_fifo_wait() should be modified to reset the accelerator when
this timeout happens instead of programming the chip anyway.
Another quirk is that these copyarea calls will not happen until the
first drivers/char/vt.c:redraw_screen() occurs. This will only happen
if you 1) VC switch or 2) run "consolechars" or 3) unblank the screen.
This seems to happen because until a redraw_screen() the screen scrolling
method used by fbcon is not finalized yet. I've seen this with other fb
drivers too.
So if all you do is boot straight into X you will never see this bug on
the relevant chips.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In relay's current read implementation, if the buffer is completely full
but hasn't triggered the buffer-full condition (i.e. the last write
didn't cross the subbuffer boundary) and the last subbuffer is exactly
full, the subbuffer accounting code erroneously finds nothing available.
This patch fixes the problem.
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
Cc: Andrea Righi <righi.andrea@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
It seems cdrwtool in the udftools has been unusable on "modern" kernels
for some time. A Google search reveals many people with the same issue
but no solution (cdrwtool fails to format the disk). After spending some
time tracking down the issue, it comes down to the following:
The udftools still use the older CDROM_SEND_PACKET interface to send
things like FORMAT_UNIT through to the drive. They should really be
updated, but that's another story. Since most distros are using libata
now, the cd or dvd burner appears as a SCSI device, and we wind up in
block/scsi_ioctl.c. Here, the code tries to take the "struct
cdrom_generic_command" and translate it and stuff it into a "struct
sg_io_hdr" structure so it can pass it to the modern sg_io() routine
instead. Unfortunately, there is one error, or rather an omission in the
translation. The timeout that is passed in the "struct
cdrom_generic_command" is in HZ=100 units, and this is modified and
correctly converted to jiffies by use of clock_t_to_jiffies(). However,
a little further down, this cgc.timeout value in jiffies is simply
copied into the sg_io_hdr timeout, which should be in milliseconds.
Since most modern x86 kernels seem to be built with HZ=250, the
timeout that is passed to sg_io and eventually converted to the
timeout_per_command member of the scsi_cmnd structure is now four times
too small. Since cdrwtool tries to set the timeout to one hour for the
FORMAT_UNIT command, and it takes about 20 minutes to format a 4x CDRW,
the SCSI error-handler kicks in after the FORMAT_UNIT completes because
it took longer than the incorrectly-calculated timeout.
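The fix, sketched: convert from jiffies to the milliseconds that the
sg_io() path expects at the point where the value is copied across
(names as used in the description above):

    hdr->timeout = jiffies_to_msecs(cgc.timeout);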
[jejb: fix up whitespace]
Signed-off-by: Tim Wright <timw@splhi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: HighPoint Linux Team <linux@highpoint-tech.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The class_device->device conversion is causing an oops in revalidate
because it's assuming that the device_for_each_child iterator will only
return struct scsi_device children. The conversion made all former
class_devices children of the device as well, so this assumption is
broken. Fix it.
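A hedged sketch of the filtering now needed in the iterator callback
(scsi_is_sdev_device() and to_scsi_device() are existing helpers; the
callback name is illustrative):

    static int target_visit(struct device *dev, void *data)
    {
        if (!scsi_is_sdev_device(dev))
            return 0;  /* former class_devices also appear here */
        /* ... operate on to_scsi_device(dev) ... */
        return 0;
    }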
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
That seems to imply we're running off the end of the VPD inquiry data
(although at 512 bytes, it should be long enough for just about
anything). We should be using correctly sized buffers anyway, so put
those in and hope this oops goes away.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch (as1121) fixes a bug in the USB serial core. When a device
is unregistered, the core will give back its minors -- even if the
device hasn't been assigned any!
The patch reserves the highest minor value (255) to mean that no minor
was assigned. It also removes some dead code and does a small style
fixup.
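A sketch of the reserved sentinel (the constant name is assumed and
the helper is hypothetical):

    #define SERIAL_TTY_NO_MINOR	255  /* no minor was assigned */

    if (serial->minor != SERIAL_TTY_NO_MINOR)
        return_serial_minors(serial);  /* only give back real ones */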
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch (as1115) adds unusual_devs entries with the IGNORE_RESIDUE
flag for the iRiver T10 and the Simple Tech/Datafab CF+SM card
reader. Apparently these devices provide reasonable residue values
for READ and WRITE operations, but not for others like INQUIRY or READ
CAPACITY.
This fixes the iRiver T10 problem reported in Bugzilla #11125.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
usb-storage: quirk around v1.11 firmware on Nikon D40
https://bugzilla.redhat.com/show_bug.cgi?id=454028
Just as in earlier firmware versions, we need to perform this
quirk for the latest version too.
Speculatively do the entry for the D80 too, as they seem to
have the same firmware problems historically.
Signed-off-by: Dave Jones <davej@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
snd_seq_oss_synth_make_info() incorrectly reports information
to userspace without first checking for the validity of the
device number, leading to a possible information leak (CVE-2008-3272).
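A sketch of the missing validation at the top of the function (the
bound's field name is assumed):

    if (dev < 0 || dev >= dp->max_synthdev)
        return -ENXIO;  /* reject bogus device numbers */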
Lookup can install a child dentry for a deleted directory. This keeps
the directory dentry alive, and the inode pinned in the cache and on
disk, even after all external references have gone away.
This isn't a big problem normally, since memory pressure or umount
will clear out the directory dentry and its children, releasing the
inode. But for UBIFS this causes problems because its orphan area can
overflow.
Fix this by returning ENOENT for all lookups on a S_DEAD directory
before creating a child dentry.
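A sketch of the check, using the existing S_DEAD test helper (dir here
is the parent inode, as in ->lookup()):

    if (IS_DEADDIR(dir))
        return ERR_PTR(-ENOENT);  /* parent already deleted */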
Thanks to Zoltan Sogor for noticing this while testing UBIFS, and
Artem for the excellent analysis of the problem and testing.
Don't create mixer volume elements for Headphone and Speaker if they
use the same DAC as normal line-outs on AD1988. Otherwise the amp
value gets screwed up, e.g.
https://bugzilla.novell.com/show_bug.cgi?id=398255
This patch (as1097) fixes a bug in the remote-wakeup handling in
ehci-hcd. The driver currently does not keep track of whether the
change-suspend feature is enabled for each port; the feature is
automatically reset the first time it is read. But recent changes to
the hub driver require that the feature be read at least twice in
order to work properly.
A bit-vector is added for storing the change-suspend feature values.
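A hedged sketch of the added state and its use (field placement and
names assumed):

    /* in struct ehci_hcd: one bit per root-hub port */
    unsigned long port_c_suspend;

    /* when resume signaling completes: */
    set_bit(port, &ehci->port_c_suspend);

    /* in GetPortStatus: keep reporting until explicitly cleared */
    if (test_bit(port, &ehci->port_c_suspend))
        status |= 1 << USB_PORT_FEAT_C_SUSPEND;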
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Acked-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
My laptop thinks that it's a good idea to give -73C as the critical
CPU temperature... which isn't the best thing, since it causes a
shutdown right at bootup.
Temperatures below freezing are clearly invalid critical thresholds
so just reject these as such.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Current versions of gdb require a working implementation of
PTRACE_GETSIGINFO for proper watchpoint support. Since struct siginfo
contains pointers it must be converted when passed to a 32-bit debugger.
Signed-off-by: Andreas Schwab <schwab@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Hardware encryption doesn't work yet, so let's use software
encryption for now.
Changes-licensed-under: 3-Clause-BSD
Signed-off-by: Luis R. Rodriguez <mcgrof@winlab.rutgers.edu>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Cc: Jiri Benc <jbenc@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If acpi_install_notify_handler() for a bay device fails, the bay driver is
superfluous. Most likely, another driver (like libata) is already caring
about this device anyway. Furthermore,
register_hotplug_dock_device(acpi_handle) from the dock driver must not be
called twice with the same handler. This would result in an endless loop
consuming 100% of CPU. So clean up and exit.
Signed-off-by: Holger Macht <hmacht@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When using the HIDP or BNEP kernel support, the user-space needs to
know if the connection has been terminated for some reasons. Wake up
the application if that happens. Otherwise kernel and user-space are
no longer on the same page and weird behaviors can happen.
I received a complaint that some FAT-formatted media (e.g. SD memory
cards) trigger an "unknown partition table" message even though there
is no partition table and they work correctly, while in general (when
e.g. formatted with mkdosfs or even Windows Vista) this message is not
shown.
Currently this seems only to happen when the media get formatted with
Windows XP (and possibly Win 2000). Then the boot indicator byte
contains garbage (part of a text message) and so do the other parts
checked by msdos_partition, which then later triggers this message.
References: novell bug #364365
Most FAT-formatted media without a partition table contain zeros in
the boot indicator and the other tested bytes, and so fall through the
checks in msdos_partition, leading it to return 1 (all is fine).
But some (e.g. WinXP-formatted) FAT media don't use boot_ind, so the
check fails and causes an "unknown partition table" warning even
though there is none and everything would be fine.
This additional check directly verifies if there is a fat formatted medium
without a partition table.
Signed-off-by: Frank Seidel <fseidel@suse.de>
Cc: Andreas Dilger <adilger@sun.com>
Acked-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The on-disk media specification field in FAT is only 8 bits, so testing for
<=0xff is pointless, and can generate a "comparison is always true due to
limited range of data type" warning.
While we're there, convert FAT_VALID_MEDIA() into a C function - the present
implementation is buggy: it generates either one or two references to its
argument.
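A sketch of the conversion; an inline function evaluates its argument
exactly once, unlike the old macro:

    static inline int fat_valid_media(u8 media)
    {
        return 0xf8 <= media || media == 0xf0;
    }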
Cc: Frank Seidel <fseidel@suse.de>
Acked-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Fujitsu Siemens Amilo Pro V2030 needs a nomux table entry, in addition
to the already existing entry for the V2010 model (note that
Fujitsu-Siemens changed the capitalization in the DMI product data).
This patch introduces i8042_dmi_nopnp_table to make it possible to perform
DMI matches for systems that need 'i8042.nopnp' to work correctly, and
introduces such an entry for the Intel D845PESV -- this system doesn't
detect the PS/2 mouse reliably without this option, as reported by
Robert Lewis.
[dtor@mail.ru - make it compile if CONFIG_PNP is off - reported
by Randy Dunlap]
There are several cases where the running transaction can get buffers added to
its BJ_Metadata list which it never dirtied, which makes its t_nr_buffers
counter end up larger than its t_outstanding_credits counter.
This will cause issues when starting new transactions: while we are
logging buffers we decrement t_outstanding_credits, so when
t_outstanding_credits goes negative we will report that we need less
space in the journal than we actually need, and transactions will be
started even though there may not be enough room for them. In the
worst case scenario (which admittedly is almost impossible to
reproduce) this will result in the journal running out of space.
The fix is to only refile buffers from the committing transaction to
the running transaction's BJ_Metadata list when b_modified is set on
that buffer's journal head, which is the only way to be sure the
running transaction has modified that buffer.
This patch also fixes an accounting error in journal_forget(): it is
possible to call journal_forget() on a buffer without having modified
it, only gotten write access to it, so instead of always freeing a
credit, we only do so if the buffer was modified. The assert will help
catch it if this problem occurs.
Without these two patches I could hit this assert within minutes of
running postmark; with them the issue no longer arises.
Signed-off-by: Josef Bacik <jbacik@redhat.com>
Cc: <linux-ext4@vger.kernel.org>
Acked-by: Jan Kara <jack@ucw.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
journal_try_to_free_buffers() could race with the jbd commit
transaction when the latter is holding the buffer reference while
waiting for the data buffer to flush to disk. If the caller of
journal_try_to_free_buffers() tries hard to release the buffers, it
will treat the failure as an error and return back to the caller. We
have seen direct I/O fail due to this race. Some callers of
releasepage() also expect the buffer to be dropped when it is passed
the GFP_KERNEL mask, via releasepage()->journal_try_to_free_buffers().
With this patch, if the caller passes __GFP_WAIT and __GFP_FS to
indicate this call could wait, then in case try_to_free_buffers()
fails, we wait for journal_commit_transaction() to finish committing
the current transaction, then try to free those buffers again.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Mingming Cao <cmm@us.ibm.com>
Reviewed-by: Badari Pulavarty <pbadari@us.ibm.com>
Acked-by: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Currently at the start of a journal commit we loop through all of the buffers
on the committing transaction and clear the b_modified flag (the flag that is
set when a transaction modifies the buffer) under the j_list_lock.
The problem is that everywhere else this flag is modified only under
the jbd buffer state lock, so it will race with a running transaction,
which could potentially set it, only to have it unset by the
committing transaction.
This is also a big waste: you can have several thousand buffers that
you are clearing the modified flag on when you may not need to. This patch
removes this code and instead clears the b_modified flag upon entering
do_get_write_access/journal_get_create_access, so if that transaction does
indeed use the buffer then it will be accounted for properly, and if it does
not then we know we didn't use it.
That will be important for the next patch in this series. Tested thoroughly
by myself using postmark/iozone/bonnie++.
Signed-off-by: Josef Bacik <jbacik@redhat.com>
Cc: <linux-ext4@vger.kernel.org>
Acked-by: Jan Kara <jack@ucw.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In case both EEXIST and EROFS would apply, we used to return the
former in mkdir(2) and friends. Lest anyone suspect us of being
consistent, in the same situation knfsd gave clients nfs_erofs...
The ro-bind series switched the syscall side of things to returning
-EROFS and immediately broke an application, namely mkdir -p. This
patch restores the original behaviour...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Jan Blunck <jblunck@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Newer Dell CERC firmware (>= 6.62) implements random deletion handling
compatible with the legacy megaraid driver. The legacy handling
shifted the target ID by 0x80 only for I/O commands (READ/WRITE/etc),
whereas megaraid_mbox always shifts the target ID if random deletion
is supported. This resulted in megaraid_mbox sending an INQUIRY to the
wrong channel and, obviously, not finding any devices.
So we disable the random deletion support if the offending firmware is
found.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Bo Yang <Bo.Yang@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
We zero-fill them like we are supposed to, and that's all fine. It's
only an error if the 'romfs_copyfrom()' routine isn't able to fill the
data that is supposed to be there.
Most of the patch is really just re-organizing the code a bit, and using
separate variables for the error value and for how much of the page we
actually filled from the filesystem.
Reported-and-tested-by: Chris Fester <cfester@wms.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Matt Waddel <matt.waddel@freescale.com>
Cc: Greg Ungerer <gerg@snapgear.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The iov_iter_advance() function would look at the iov->iov_len entry
even though it might have iterated over the whole array, and iov was
pointing past the end. This would cause DEBUG_PAGEALLOC to trigger a
kernel page fault if the allocation was at the end of a page, and the
next page was unallocated.
The quick fix is to just change the order of the tests: check that there
is any iovec data left before we check the iov entry itself.
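A sketch of the reordered loop, mirroring the description (variable
names follow the iov_iter code):

    while (bytes) {  /* test the remaining count first */
        size_t copy = min(bytes, iov->iov_len - base);

        bytes -= copy;
        base += copy;
        if (iov->iov_len == base) {
            iov++;  /* never dereferenced once bytes == 0 */
            base = 0;
        }
    }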
Thanks to Alexey Dobriyan for finding this case, and testing the fix.
Reported-and-tested-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When a conntrack entry is destroyed in process context and destruction
is interrupted by packet processing and the packet is an attempt to
reopen a closed connection, TCP conntrack tries to kill the old entry
itself and returns NF_REPEAT to pass the packet through the hook
again. This may lead to an endless loop: TCP conntrack repeatedly
finds the old entry, but can not kill it itself since destruction
is already in progress, but destruction in process context can not
complete since TCP conntrack is keeping the CPU busy.
Drop the packet in TCP conntrack if we can't kill the connection
ourselves to avoid this.
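A hedged sketch of the logic in the TCP tracker (the timer dance is
how conntrack entries were killed in this era):

    /* Only repeat if we can actually remove the timer ourselves;
     * otherwise destruction is already in progress elsewhere. */
    if (del_timer(&ct->timeout)) {
        ct->timeout.function((unsigned long)ct);
        return -NF_REPEAT;
    }
    return -NF_DROP;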
Reported by: hemao77@gmail.com [ Kernel bugzilla #11058 ]
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Correct hash flushing from huge_ptep_set_wrprotect() [stable tree version]
A fix for incorrect flushing of the hash page table at fork() for
hugepages was recently committed as 86df86424939d316b1f6cfac1b6204f0c7dee317. Without this fix, a process
can make a MAP_PRIVATE hugepage mapping, then fork() and have writes
to the mapping after the fork() pollute the child's version.
Unfortunately this bug also exists in the stable branch. In fact in
that case copy_hugetlb_page_range() from mm/hugetlb.c calls
ptep_set_wrprotect() directly; the hugepage variant hook
huge_ptep_set_wrprotect() doesn't even exist.
The patch below is a port of the fix to the stable25/master branch.
It introduces a huge_ptep_set_wrprotect() call, but this is #defined
to be equal to ptep_set_wrprotect() unless the arch defines its own
version and sets __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT.
This arch preprocessor flag is kind of nasty, but it seems the sanest
way to introduce this fix with minimum risk of breaking other archs
for whom ptep_set_wrprotect() is suitable for hugepages.
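A sketch of the generic fallback described above:

    #ifndef __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
    #define huge_ptep_set_wrprotect(mm, addr, ptep) \
        ptep_set_wrprotect(mm, addr, ptep)
    #endif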
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
MSI is a nice thing, but we cannot enable it without changing the
interrupt handler. If we do, we break MSI-capable hardware,
specifically the AR5006 chipset.
Signed-off-by: Pavel Roskin <proski@gnu.org>
Acked-by: Nick Kossifidis <mickflemm@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The mutex is released on a successful return, so it would seem that it
should be released on an error return as well.
The semantic patch that finds this problem is as follows:
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@@
expression l;
@@
mutex_lock(l);
... when != mutex_unlock(l)
when any
when strict
(
if (...) { ... when != mutex_unlock(l)
+ mutex_unlock(l);
return ...;
}
|
mutex_unlock(l);
)
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Michael Buesch <mb@bu3sch.de>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>