Add a definition of the amount of TX headroom reserved by mac80211 itself
for its own purposes. Also add BUILD_BUG_ON to validate the value.
This define can then be used by drivers to request additional TX headroom
in the most efficient manner.
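A rough sketch of the shape of such a change (the names and the value below are illustrative of the idea, not copied from the patch):

    /* TX headroom reserved by mac80211 for its own purposes */
    #define IEEE80211_TX_STATUS_HEADROOM	13	/* illustrative value */

    /* somewhere in mac80211's init path: compile-time validation that the
     * define stays in sync with the structure it accounts for */
    BUILD_BUG_ON(IEEE80211_TX_STATUS_HEADROOM !=
                 sizeof(struct ieee80211_tx_status_rtap_hdr));

    /* a driver can then request exactly the extra room it needs on top: */
    hw->extra_tx_headroom = IEEE80211_TX_STATUS_HEADROOM + MY_DRIVER_TX_HEADROOM;

where MY_DRIVER_TX_HEADROOM stands in for whatever the driver itself requires.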
Signed-off-by: Gertjan van Wingerde <gwingerde@gmail.com> Acked-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
[bwh: Adjust context for 2.6.32] Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
virtio net will never try to overflow the TX ring, so the only reason
add_buf may fail is out of memory. Thus, we can not stop the
device until some request completes - there's no guarantee anything
at all is outstanding.
Make the error message clearer as well: error here does not
indicate queue full.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Under harsh testing conditions, including low memory, the guest would
stop receiving packets. With this patch applied we no longer see any
problems in the driver while performing these tests for extended periods
of time.
Make sure napi is scheduled subsequent to each napi_enable.
Signed-off-by: Bruce Rogers <brogers@novell.com> Signed-off-by: Olaf Kirch <okir@suse.de> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[bwh: Adjust for 2.6.32] Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
I've found the following patch is necessary to enable line-in on
my MacBookPro 5,3 machine. With the patch applied I've successfully
recorded audio from the line-in jack. This is based on the existing
5,5 support.
Add the iMac9,1 and the MacBookPro2,2 temperature sensors to hwmon
driver applesmc to fix kernel bug #14429:
https://bugzilla.kernel.org/show_bug.cgi?id=14429
Signed-off-by: Justin P. Mattock <justinmattock@gmail.com> Acked-by: Nicolas Boichat <nicolas@boichat.ch> Signed-off-by: Jean Delvare <khali@linux-fr.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Please add support for Microsoft MN-120 PCMCIA network card. It's an
old card, I know, but adding support is very easy. You just need to
get tulip_core.c to recognise its vendor/device ID.
Patch for kernel 2.6.32.4 (and many previous) attached.
.....Ron Murray
Signed-off-by: Ron Murray <rjmx@rjmx.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
It turns out that while a maximum of 8 partitions may be what people
"should" have had, you can actually fit up to 18 entries(*) in a sector.
And some people clearly were taking advantage of that, like Michael
Cree, who had ten partitions on one of his OSF disks.
(*) The OSF partition data starts at byte offset 64 in the first sector,
and the array of 16-byte partition entries starts at offset 148 in
the on-disk partition structure.
The kernel automatically evaluates partition tables of storage devices.
The code for evaluating OSF partitions contains a bug that leaks data
from kernel heap memory to userspace for certain corrupted OSF
partitions.
In more detail:
for (i = 0 ; i < le16_to_cpu(label->d_npartitions); i++, partition++) {
iterates from 0 to d_npartitions - 1, where d_npartitions is read from
the partition table without validation and partition is a pointer to an
array of at most 8 d_partitions.
Add the proper and obvious validation.
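The shape of the validation, as a sketch following the description above (an explicit constant replacing the former magic value '8'):

    #define MAX_OSF_PARTITIONS 8

    npartitions = le16_to_cpu(label->d_npartitions);
    if (npartitions > MAX_OSF_PARTITIONS)
            npartitions = MAX_OSF_PARTITIONS;

    for (i = 0; i < npartitions; i++, partition++) {
            /* evaluate each entry; the loop can no longer walk past
             * the 8-element d_partitions array */
    }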
Signed-off-by: Timo Warns <warns@pre-sense.de> Cc: stable@kernel.org
[ Changed the patch trivially to not repeat the whole le16_to_cpu()
thing, and to use an explicit constant for the magic value '8' ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
It occurs because an skb with a fraglist was freed from the tcp
retransmit queue when it was acked, but a page on that fraglist had
PG_slab set (indicating it was allocated from the slab allocator), which
means the free path above can't safely free it via put_page.
We tracked this back to an nfsv4 setacl operation, in which the nfs code
attempted to convert the passed-in buffer to an array of pages in
__nfs4_proc_set_acl, which gets used by the skb->frags list in
xs_sendpages. __nfs4_proc_set_acl just converts each page in the buffer
to a page struct via virt_to_page, but the vfs allocates the buffer via
kmalloc, meaning the PG_slab bit is set. We can't create a buffer with
kmalloc and free it later in the tcp ack path with put_page, so we need
to either:
1) ensure that when we create the list of pages, no page struct has
PG_Slab set
or
2) not use a page list to send this data
Given that these buffers can be multiple pages and arbitrarily sized, I
think (1) is the right way to go. I've written the below patch to
allocate a page from the buddy allocator directly and copy the data over
to it. This ensures that we have a put_page free-able page for every
entry that winds up on an skb frag list, so it can be safely freed when
the frame is acked. We do a put_page() on each entry after the
rpc_call_sync call so as to drop our own reference count to the page,
leaving only the ref count taken by tcp_sendpages. This way the data
will be properly freed when the ack comes in.
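A minimal sketch of approach (1), with illustrative names (not the literal patch):

    /*
     * Copy a kmalloc'ed buffer into pages from the page allocator, so every
     * page that ends up on the skb frag list is safe to free with put_page().
     */
    static int nfs4_copy_buf_to_pages(const char *buf, size_t buflen,
                                      struct page **pages)
    {
            int i, npages = DIV_ROUND_UP(buflen, PAGE_SIZE);

            for (i = 0; i < npages; i++) {
                    size_t len = min_t(size_t, buflen - i * PAGE_SIZE, PAGE_SIZE);

                    pages[i] = alloc_page(GFP_KERNEL);  /* not a slab page */
                    if (!pages[i])
                            return -ENOMEM;
                    memcpy(page_address(pages[i]), buf + i * PAGE_SIZE, len);
            }
            return npages;
    }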
Successfully tested by myself to solve the above oops.
Note, as this is the result of a setacl operation that exceeded a page
of data, I think this amounts to a local DoS triggerable by an
unprivileged user, so I'm CCing security on this as well.
When reusing a TCP connection, ensure that it's aborted if a previous
shutdown attempt has been made on that connection so that the RPC over
TCP recovery mechanism succeeds.
HighMem pages on i686 do not get mapped to the buffer_heads and this was
causing a NULL pointer dereference when we were trying to memset page buffers
to zero.
We now use zero_user(), which kmaps the page and directly manipulates the page data.
This patch also fixes a boundary condition that was incorrect.
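For illustration, the pattern is roughly (page, offset and amount are illustrative names):

    /* instead of memset(bh->b_data + offset, 0, amount), which oopses when
     * the page lives in highmem and has no permanent kernel mapping: */
    zero_user(page, offset, amount);    /* kmaps, memsets, kunmaps */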
Signed-off-by: Abhi Das <adas@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
[Adjusted to apply to 2.6.32 by dann frazier <dannf@debian.org>] Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This is the upstream fix for this bug. This patch differs
from the RHEL5 fix (Red Hat bz #555754) which simply writes to the 8-byte
value field of the quota. In upstream quota code, we're
required to write the entire quota (88 bytes) which can be split
across a page boundary. We check for such quotas, and read/write
the two parts from/to the corresponding pages holding these parts.
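Roughly, the split works like this (a sketch assuming the on-disk quota is sizeof(struct gfs2_quota) bytes and "off" is its byte offset in the quota file, as described above):

    unsigned long pg_off = off & (PAGE_SIZE - 1);       /* offset within 1st page */
    unsigned int len1 = min_t(unsigned int,
                              sizeof(struct gfs2_quota), PAGE_SIZE - pg_off);
    unsigned int len2 = sizeof(struct gfs2_quota) - len1;

    /* write len1 bytes at pg_off in the first page; if len2 is non-zero,
     * write the remaining len2 bytes at offset 0 in the following page */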
With this patch, I don't see the bug anymore using the reproducer
in Red Hat bz 555754. I successfully ran a couple of simple tests/mounts/
umounts and it doesn't seem like this patch breaks anything else.
Signed-off-by: Abhi Das <adas@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
[Backported to 2.6.32 by dann frazier <dannf@debian.org>] Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Both of these functions contained confusing and in one case
duplicate code. This patch adds a new check in do_glock()
so that we report -ENOENT if we are asked to sync a quota
entry which doesn't exist. Due to the previous patch this is
now reported correctly to userspace.
Also there are a few new comments, and I hope that the code
is easier to understand now.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
USB: teach "devices" file about Wireless and SuperSpeed USB
The /sys/kernel/debug/usb/devices file doesn't know about Wireless or
SuperSpeed USB. This patch (as1416b) teaches it, and updates the
Documentation/usb/proc_usb_info.txt file accordingly.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu> CC: David Vrabel <david.vrabel@csr.com> CC: Sarah Sharp <sarah.a.sharp@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[Julien Blache: The original commit also added the correct speed for
USB_SPEED_WIRELESS, I removed it as it's not supported in 2.6.32.] Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch (as1364) avoids enabling remote wakeup by default on all
non-root-hub USB devices. Individual drivers or userspace will have
to enable it wherever it is needed, such as for keyboards or network
interfaces. Note: This affects only system sleep, not autosuspend.
External hubs will continue to relay wakeup requests received from
downstream through their upstream port, even when remote wakeup is not
enabled for the hub itself. Disabling remote wakeup on a hub merely
prevents it from generating wakeup requests in response to connect,
disconnect, and overcurrent events.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Currently a non-root-hub USB device's wakeup settings are initialized when the
device is set to a configured state using device_init_wakeup(), but this is not
correct as wakeup is split into "capable" (can_wakeup) and "enabled"
(should_wakeup). The settings should be initialized instead in the device
initialization (usb_new_device) with the "capable" setting disabled and the
"enabled" setting enabled. The "capable" setting should be set based on the
device being configured or unconfigured, and "enabled" setting set based on
the sysfs power/wakeup control.
This patch retains the sysfs power/wakeup setting of a non-root-hub USB device
over a USB device re-configuration, which can happen (for example) after a
suspend/resume cycle.
Signed-off-by: Dan Streetman <ddstreet@ieee.org> Cc: David Brownell <dbrownell@users.sourceforge.net> Cc: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[bwh: Adjust context for 2.6.32] Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch adds some device IDs.
The list of supported devices was extracted from Realtek's driver package.
(0x050d, 0x815F) and (0x0df6, 0x004b) are not in the official list of
supported devices and may not work correctly.
In case of problems with these, they should probably be removed from the list.
This patch removes some device IDs.
The list of unsupported devices was extracted from Realtek's driver package.
Removed IDs are:
(0x0bda, 0x8192)
(0x0bda, 0x8709)
(0x07aa, 0x0043)
(0x050d, 0x805E)
(0x0df6, 0x0031)
(0x1740, 0x9201)
(0x2001, 0x3301)
(0x5a57, 0x0290)
These devices are _not_ rtl819su based.
The current code creates directories in procfs named after interfaces,
but doesn't handle renaming. This can result in name collisions and
consequent WARNINGs. It also means that the interface name cannot
reliably be used to remove the directory - in fact the current code
doesn't even try, and always uses "wlan0"!
Since the name of a proc_dir_entry is embedded in it, use that when
removing it.
Add a netdev notifier to catch interface renaming, and remove and
re-add the directory at this point.
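A sketch of the notifier part (names are illustrative; in 2.6.32 the notifier's ptr argument is the struct net_device itself):

    static int wlan_proc_netdev_event(struct notifier_block *nb,
                                      unsigned long event, void *ptr)
    {
            struct net_device *dev = ptr;

            if (event == NETDEV_CHANGENAME) {
                    /* remove the proc directory under its old (recorded)
                     * name, then re-create it as dev->name */
            }
            return NOTIFY_DONE;
    }

    static struct notifier_block wlan_proc_notifier = {
            .notifier_call = wlan_proc_netdev_event,
    };
    /* registered with register_netdevice_notifier(&wlan_proc_notifier) */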
Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
According to the Dell/Ubuntu driver, what was previously observed as
"jumpy cursor" corresponds to the hardware sending incorrect data for
the first two reports of a one touch finger. So let's use the same
workaround as in the other driver. Also, detect another firmware
version with the same behaviour, as in the other driver.
Apparently there are Elantech touchpads that report non-zero in the 2nd byte
of their signature. Adjust the detection routine so that if the 2nd byte is
zero and the 3rd byte contains a value that is not a valid report rate, we still
assume that the signature is valid.
Tested-by: Eric Piel <eric.piel@tremplin-utc.net> Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
[bwh: Adjust context for 2.6.32] Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In older versions of the elantech hardware/firmware those bits always
were unset, so it didn't actually matter, but newer versions seem to
use those high bits for something else, screwing up the coordinates
we report to the input layer for those devices.
Apparently hardware vendors now ship elantech touchpads with different version
magic. This option allows them to be tested more easily with the current driver,
in order to add their magic to the whitelist later.
The check determining whether the device should use 4- or 6-byte packets
was trying to compare the firmware version with 2.48, but was failing on majors
greater than 2. The new check ensures that versions like 4.1 are
checked properly.
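The corrected comparison, in sketch form (names illustrative):

    /* treat the firmware version as a (major, minor) pair, so that e.g.
     * 4.1 correctly counts as newer than 2.48 (0x30 == 48) */
    static bool fw_newer_than_2_48(unsigned char major, unsigned char minor)
    {
            return major > 2 || (major == 2 && minor >= 0x30);
    }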
The new type of touchpads can be detected via a new query command
0x0c. The clickpad flags are in cap[0]:4 and cap[1]:0 bits.
When the device is detected, the driver now reports only the left
button as the supported buttons so that X11 driver can detect that
the device is a Clickpad. A Clickpad device reports its button events
only as the middle button; the kernel driver morphs these into left-button
events. The real handling of the Clickpad is done on the X driver side.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Acked-by: Sascha Hauer <s.hauer@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Replace run-time string formatting with preprocessor string
manipulation.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Acked-by: Divy Le Ray <divy@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Replace run-time string formatting with preprocessor string
manipulation.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Acked-by: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Add MODULE_FIRMWARE hints for various firmware file types,
required by different chip revisions.
Signed-off-by: Dhananjay Phadke <dhananjay@netxen.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The kernel automatically evaluates partition tables of storage devices.
The code for evaluating LDM partitions (in fs/partitions/ldm.c) contains
a bug that causes a kernel oops on certain corrupted LDM partitions.
A kernel subsystem seems to crash because, after the oops, the kernel no
longer recognizes newly connected storage devices.
The patch validates the value of vblk_size.
[akpm@linux-foundation.org: coding-style fixes] Signed-off-by: Timo Warns <warns@pre-sense.de> Cc: Eugene Teo <eugeneteo@kernel.sg> Cc: Harvey Harrison <harvey.harrison@gmail.com> Cc: Richard Russon <rich@flatcap.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
An open on a NFS4 share using the O_CREAT flag on an existing file for
which we have permissions to open but contained in a directory with no
write permissions will fail with EACCES.
A tcpdump shows that the client had set the open mode to UNCHECKED which
indicates that the file should be created if it doesn't exist and
encountering an existing file is not an error. Since in this case the
file exists and can be opened by the user, the NFS server is wrong in
attempting to check create permissions on the parent directory.
The patch adds a conditional statement to check for create permissions
only if the file doesn't exist.
Signed-off-by: Sachin S. Prabhu <sprabhu@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The fix in commit 6b4e81db2552 ("i8k: Tell gcc that *regs gets
clobbered"), which worked around gcc miscompiling i8k.c by adding "+m
(*regs)", caused register pressure problems and a build failure.
Changing the 'asm' statement to 'asm volatile' instead should prevent
that and works around the gcc bug as well, so we can remove the "+m".
[ Background on the gcc bug: a memory clobber fails to mark the function
the asm resides in as non-pure (aka "__attribute__((const))"), so if
the function does nothing else that triggers the non-pure logic, gcc
will think that that function has no side effects at all. As a result,
callers will be mis-compiled.
Adding the "+m" made gcc see that it's not a pure function, and so
does "asm volatile". The problem was never really the need to mark
"*regs" as changed, since the memory clobber did that part - the
problem was just a bug in the gcc "pure" function analysis - Linus ]
Signed-off-by: Jim Bos <jim876@xs4all.nl> Acked-by: Jakub Jelinek <jakub@redhat.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
More recent GCC caused the i8k driver to stop working. On Slackware the
compiler was upgraded from gcc-4.4.4 to gcc-4.5.1, after which the driver
didn't work anymore: it either didn't load or gave totally nonsensical
output.
As it turned out, the asm(...) statement failed to mention that it modifies the
*regs variable.
Credits to Andi Kleen and Andreas Schwab for providing the fix.
Signed-off-by: Jim Bos <jim876@xs4all.nl> Cc: Andi Kleen <andi@firstfloor.org> Cc: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When CONFIG_OABI_COMPAT is set, the wrapper for semtimedop does not
bound the nsops argument. A sufficiently large value will cause an
integer overflow in allocation size, followed by copying too much data
into the allocated buffer. Fix this by restricting nsops to SEMOPM.
Untested.
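The fix amounts to a bound check of roughly this shape before the buffer is sized (sketch):

    if (nsops < 1 || nsops > SEMOPM)
            return -EINVAL;
    /* sizeof(*sops) * nsops can no longer overflow */
    sops = kmalloc(sizeof(*sops) * nsops, GFP_KERNEL);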
Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This fixes the following oops discovered by Dan Aloni:
> Anyway, the following is the output of the Oops that I got on the
> Ubuntu kernel on which I first detected the problem
> (2.6.37-12-generic). The Oops that followed will be more useful, I
> guess.
The bug was that unix domain sockets use a pseudo packet for
connecting, and accept uses that pseudo packet to get the socket.
In the buggy seqpacket case we were allowing unconnected
sockets to call recvmsg and try to receive the pseudo packet.
That is always wrong, and as of commit 7361c36c5 the pseudo
packet had become different enough from a normal packet
that the kernel started oopsing.
Do for seqpacket_recv what was done for seqpacket_send in 2.5
and only allow it on connected seqpacket sockets.
Tested-by: Dan Aloni <dan@aloni.org> Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Older AMD K8 processors (Revisions A-E) are affected by erratum
400 (APIC timer interrupts don't occur in C states greater than
C1). This, for example, means that the X86_FEATURE_ARAT flag should
not be set for these parts.
This addresses a regression introduced by commit b87cf80af3ba4b4c008b4face3c68d604e1715c6 ("x86, AMD: Set ARAT
feature on AMD processors") where the system may become
unresponsive until external interrupt (such as keyboard input)
occurs. This results, for example, in time not being reported
correctly, lack of progress on the system and other lockups.
This patch (as1460) fixes a regression in the usbip driver caused by
the new check for Transaction Translators in USB-2 hubs. The root hub
registered by vhci_hcd needs to have the has_tt flag set, because it
can connect to low- and full-speed devices as well as high-speed
devices.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reported-and-tested-by: Nikola Ciprich <nikola.ciprich@linuxbox.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
It seems that under certain circumstances the sdhci_tasklet_finish()
call can be entered with mrq set to NULL, causing the system to crash
with a NULL pointer de-reference.
Seen on S3C6410 system. Based on a patch by Dimitris Papastamos.
It seems that under certain circumstances the sdhci_tasklet_finish()
call can be entered with mrq->cmd set to NULL, causing the system to crash
with a NULL pointer de-reference.
Unable to handle kernel NULL pointer dereference at virtual address 00000000
PC is at sdhci_tasklet_finish+0x34/0xe8
LR is at sdhci_tasklet_finish+0x24/0xe8
Seen on S3C6410 system.
Signed-off-by: Ben Dooks <ben-linux@fluff.org> Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Chris Ball <cjb@laptop.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If pci_ioremap_bar() fails during probe, we "goto release;" and free the
host, but then we return 0 -- which tells sdhci_pci_probe() that the probe
succeeded. Since we think the probe succeeded, when we unload sdhci we'll
go to sdhci_pci_remove_slot() and it will try to dereference slot->host,
which is now NULL because we freed it in the error path earlier.
The patch simply sets ret appropriately, so that sdhci_pci_probe() will
detect the failure immediately and bail out.
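In sketch form, the error path now propagates a failure code instead of falling through with ret still 0:

    host->ioaddr = pci_ioremap_bar(pdev, bar);
    if (!host->ioaddr) {
            ret = -ENOMEM;          /* previously left at 0, so the probe
                                     * appeared to succeed */
            goto release;
    }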
Signed-off-by: Chris Ball <cjb@laptop.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
SCSI uses request_queue->queuedata == NULL as a signal that the queue
is dying. We set this state in the sdev release function. However,
this allows a small window where we release the last reference but
haven't quite got to this stage yet and so something will try to take
a reference in scsi_request_fn and oops. It's very rare, but we had a
report here, so we're pushing this as a bug fix.
The actual fix is to set request_queue->queuedata to NULL in
scsi_remove_device() before we drop the reference. This causes
correct automatic rejects from scsi_request_fn as people who hold
additional references try to submit work and prevents anything from
getting a new reference to the sdev that way.
Signed-off-by: James Bottomley <James.Bottomley@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
At two points in handling device ioctls via /dev/mpt2ctl, user-supplied
length values are used to copy data from userspace into heap buffers
without bounds checking, allowing controllable heap corruption and
subsequently privilege escalation.
Additionally, user-supplied values are used to determine the size of a
copy_to_user() as well as the offset into the buffer to be read, with no
bounds checking, allowing users to read arbitrary kernel memory.
Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com> Acked-by: Eric Moore <eric.moore@lsi.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
There's a code path in pmcraid that can be reached via device ioctl that
causes all sorts of ugliness, including heap corruption or triggering
the OOM killer due to consecutive allocation of large numbers of pages.
Not especially relevant from a security perspective, since users must
have CAP_SYS_ADMIN to open the character device.
First, the user can call pmcraid_chr_ioctl() with a type
PMCRAID_PASSTHROUGH_IOCTL. A pmcraid_passthrough_ioctl_buffer
is copied in, and the request_size variable is set to
buffer->ioarcb.data_transfer_length, which is an arbitrary 32-bit signed
value provided by the user.
If a negative value is provided here, bad things can happen. For
example, pmcraid_build_passthrough_ioadls() is called with this
request_size, which immediately calls pmcraid_alloc_sglist() with a
negative size. The resulting math on allocating a scatter list can
result in an overflow in the kzalloc() call (if num_elem is 0, the
sglist will be smaller than expected), or if num_elem is unexpectedly
large the subsequent loop will call alloc_pages() repeatedly, a high
number of pages will be allocated and the OOM killer might be invoked.
Prevent this value from being negative in pmcraid_ioctl_passthrough().
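The added check is essentially the following (a sketch, assuming request_size is a plain signed int as described above):

    request_size = le32_to_cpu(buffer->ioarcb.data_transfer_length);
    if (request_size < 0)
            return -EINVAL;     /* reject before it is used as an allocation size */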
Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com> Cc: Anil Ravindranath <anil_ravindranath@pmc-sierra.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The mouse gets "stuck" after restore of a PV guest, but the buttons remain in
working condition.
If the driver has been configured for ABS coordinates at start, it will get
XENKBD_TYPE_POS events; then suddenly after restore it'll start getting
XENKBD_TYPE_MOTION events, which will be dropped later and won't get
into user-space.
The regression was introduced by hunks 5 and 6 of commit
5ea5254aa0ad269cfbd2875c973ef25ab5b5e9db ("Input: xen-kbdfront - advertise
either absolute or relative coordinates").
On restore the driver should ask Xen for request-abs-pointer again if it is
available, so restore the parts that did this before 5ea5254.
Acked-by: Olaf Hering <olaf@aepfle.de> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
[v1: Expanded the commit description] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
page_count is copied from userspace. agp_allocate_memory() tries to
check whether this number is too big, but doesn't take into account the
wrap case. Also, agp_create_user_memory() doesn't check whether
alloc_size is calculated from the num_agp_pages variable without overflow.
This may lead to allocation of a too-small buffer, followed by a buffer
overflow.
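A sketch of the missing overflow check (field and bound names illustrative, not the exact patch):

    /* refuse page counts whose size computation would wrap */
    if (page_count > (INT_MAX - sizeof(struct agp_memory)) / sizeof(struct page *))
            return NULL;
    alloc_size = page_count * sizeof(struct page *) + sizeof(struct agp_memory);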
Another problem in agp code is not addressed in the patch - kernel memory
exhaustion (AGPIOC_RESERVE and AGPIOC_ALLOCATE ioctls). It is not checked
whether the requested pid is the pid of the caller (no check in agpioc_reserve_wrap()).
Each allocation is limited to 16KB; however, there is no per-process limit.
This might lead to an OOM situation, which is not resolved even if the caller
dies at the hands of the OOM killer - the memory is allocated on behalf of
another (faked) process.
pg_start is copied from userspace on AGPIOC_BIND and AGPIOC_UNBIND ioctl
cmds of agp_ioctl() and passed to agpioc_bind_wrap(). As said in the
comment, (pg_start + mem->page_count) may wrap in case of AGPIOC_BIND,
and it is not checked at all in case of AGPIOC_UNBIND. As a result, user
with sufficient privileges (usually "video" group) may generate either
local DoS or privilege escalation.
On a remount, the VFS layer will clear the MS_SYNCHRONOUS bit on the
assumption that the flags on the mount syscall will have it set if the
remounted fs is supposed to keep it.
In the case of "noac" though, MS_SYNCHRONOUS is implied. A remount of
such a mount will lose the MS_SYNCHRONOUS flag since "sync" isn't part
of the mount options.
Reported-by: Max Matveev <makc@redhat.com> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
For m68k, N_NORMAL_MEMORY represents all nodes that have present memory
since it does not support HIGHMEM. This patch sets the bit at the time
node_present_pages has been set by free_area_init_node.
At the time the node is brought online, the node state would have to be
set unconditionally, since information about present memory has not yet
been recorded.
If N_NORMAL_MEMORY is not accurate, slub may encounter errors since it
uses this nodemask to setup per-cache kmem_cache_node data structures.
This patch is an alternative to the one proposed by David Rientjes
<rientjes@google.com> attempting to set node state immediately when
bringing the node online.
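A sketch of the idea described above (set the state once present-page counts are known; illustrative, not the literal patch):

    /* after free_area_init_node() has filled in node_present_pages */
    for_each_online_node(nid)
            if (NODE_DATA(nid)->node_present_pages)
                    node_set_state(nid, N_NORMAL_MEMORY);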
This patch fixes the warning about bad names for sysfs and other kernel
objects. The flexcop-pci driver was using '/' characters in the name, which
is not allowed.
This has been fixed in several attempts by several people, but obviously never
made it into the kernel.
When a DISCONTIGMEM memory range is brought online as a NUMA node, it
also needs to have its bit set in N_NORMAL_MEMORY. This is necessary for
generic kernel code that utilizes N_NORMAL_MEMORY as a subset of N_ONLINE
for memory savings.
These types of hacks can hopefully be removed once DISCONTIGMEM is either
removed or abstracted away from CONFIG_NUMA.
Fixes a panic in the slub code which only initializes structures for
N_NORMAL_MEMORY to save memory:
Slub makes assumptions about page_to_nid() which are violated by
DISCONTIGMEM and !NUMA. This violation results in a panic because
page_to_nid() can be non-zero for pages in the discontiguous ranges and
this leads to a null return by get_node(). The assertion by the
maintainer is that DISCONTIGMEM should only be allowed when NUMA is also
defined. However, at least six architectures: alpha, ia64, m32r, m68k,
mips, parisc violate this. The panic is a regression against slab, so
just mark slub broken in the problem configuration to prevent users
reporting these panics.
Acked-by: David Rientjes <rientjes@google.com> Acked-by: Pekka Enberg <penberg@kernel.org> Signed-off-by: James Bottomley <James.Bottomley@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
It has been reported that the new UFO software fallback path
fails under certain conditions with NFS. I tracked the problem
down to the generation of UFO packets that are smaller than the
MTU. The software fallback path simply discards these packets.
This patch fixes the problem by not generating such packets on
the UFO path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Cc: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Remove the duplicate atl1c_get_tpd call; it may cause the hardware to send wrong packets.
Signed-off-by: Jie Yang <jie.yang@atheros.com> Signed-off-by: David S. Miller <davem@davemloft.net> Cc: Willy Tarreau <w@1wt.eu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The 3880 storage control unit supports a 3380 device
type, but not a 3390 device type.
Reported-by: Stephen Powell <zlinuxman@wowway.com> Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Stephen Powell <zlinuxman@wowway.com> Cc: Jonathan Nieder <jrnieder@gmail.com> Cc: Bastian Blank <waldi@debian.org>
This patch fixes the following symptoms:
1. Unmount UBIFS cleanly.
2. Start mounting UBIFS R/W and have a power cut immediately
3. Start mounting UBIFS R/O, this succeeds
4. Try to re-mount UBIFS R/W - this fails immediately or later on,
because UBIFS will write the master node to the flash area
which has been written before.
The analysis of the problem:
1. UBIFS is unmounted cleanly, both copies of the master node are clean.
2. UBIFS is being mounted R/W, starts changing master node copy 1, and
a power cut happens. The copy N1 becomes corrupted.
3. UBIFS is being mounted R/O. It notices the copy N1 is corrupted and
reads copy N2. Copy N2 is clean.
4. Because of R/O mode, UBIFS cannot recover copy 1.
5. The mount code (ubifs_mount()) sees that the master node is clean,
so it decides that no recovery is needed.
6. We are re-mounting R/W. UBIFS believes no recovery is needed and
starts updating the master node, but copy N1 is still corrupted
and was not recovered!
Fix this problem by marking the master node as dirty every time we
recover it while we are in R/O mode. This forces further recovery, and
UBIFS cleans up the corruption and recovers copy N1 when
re-mounting R/W later.
Commit 40aee729b350 ('kconfig: fix default value for choice input')
fixed some cases where kconfig would select the wrong option from a
choice with a single valid option and thus enter an infinite loop.
However, this broke the test for user input of the form 'N?', because
when kconfig selects the single valid option the input is zero-length
and the test will read the byte before the input buffer. If this
happens to contain '?' (as it will in a mips build on Debian unstable
today) then kconfig again enters an infinite loop.
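The guard is a one-liner of roughly this form (sketch):

    /* only inspect the last character if there is any input at all */
    if (line[0] && line[strlen(line) - 1] == '?') {
            /* show help for the choice instead of looping forever */
    }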
If cts changes between reading the level at the cts input (USR1_RTSS)
and acking the irq (USR1_RTSD), the last edge doesn't generate an irq and
uart_handle_cts_change is called with an outdated value for cts.
If the call to nfs_wcc_update_inode() results in an attribute update, we
need to ensure that the inode's attr_gencount gets bumped too, otherwise
we are not protected against races with other GETATTR calls.
If we run out of domain_ids and fail iommu_attach_domain(), we
fall into domain_exit() without having setup enough of the
domain structure for this to do anything useful. In fact, it
typically runs off into the weeds walking the bogus domain->devices
list. Just free the domain.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Acked-by: Donald Dutile <ddutile@redhat.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When we remove a device, we unlink the iommu from the domain, but
we never do the reverse unlinking of the domain from the iommu.
This means that we never clear iommu->domain_ids, eventually leading
to resource exhaustion if we repeatedly bind and unbind a device
to a driver. Also free empty domains to avoid a resource leak.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Acked-by: Donald Dutile <ddutile@redhat.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch fixes a very serious off-by-one bug in
the driver, which could leave the device in an
unresponsive state.
The problem was that the extra_len variable [used to
reserve extra scratch buffer space for the firmware]
was left uninitialized. Because p54_assign_address
later needs the value to reserve additional space,
the resulting frame could be too big for the small
device's memory window and everything would
immediately come to a grinding halt.
Reference: https://bugs.launchpad.net/bugs/722185
Acked-by: Christian Lamparter <chunkeey@googlemail.com> Signed-off-by: Jason Conti <jason.conti@gmail.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Joe Culler reported a problem with his AR9170 device:
> ath: EEPROM regdomain: 0x5c
> ath: EEPROM indicates we should expect a direct regpair map
> ath: invalid regulatory domain/country code 0x5c
> ath: Invalid EEPROM contents
It turned out that the regdomain 'APL7_FCCA' was not mapped yet.
According to Luis R. Rodriguez [Atheros' engineer] APL7 maps to
FCC_CTL and FCCA maps to FCC_CTL as well, so the attached patch
should be correct.
Reported-by: Joe Culler <joe.culler@gmail.com> Acked-by: Luis R. Rodriguez <lrodriguez@atheros.com> Signed-off-by: Christian Lamparter <chunkeey@googlemail.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
As reported by Thomas Pollet, the rdma page counting can overflow. We
get the rdma sizes in 64-bit unsigned entities, but then limit it to
UINT_MAX bytes and shift them down to pages (so with a possible "+1" for
an unaligned address).
So each individual page count fits comfortably in an 'unsigned int' (not
even close to overflowing into signed), but as they are added up, they
might end up resulting in a signed return value. Which would be wrong.
Catch the case of tot_pages turning negative, and return the appropriate
error code.
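The check looks roughly like this (a sketch; tot_pages is kept signed so the wrap is observable):

    tot_pages += nr_pages;
    /*
     * Each entry contributes at most (UINT_MAX >> PAGE_SHIFT) + 1 pages,
     * so the sum must go negative before it can wrap all the way around.
     */
    if (tot_pages < 0)
            return -EINVAL;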
Reported-by: Thomas Pollet <thomas.pollet@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Andy Grover <andy.grover@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
[v2: nr is unsigned in the old code] Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Tim Gardner <tim.gardner@canonical.com> Acked-by: Brad Figg <brad.figg@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
A bug in the family-model-stepping matching code caused the presence of
errata to go undetected when OSVW was not used. This causes hangs on
some K8 systems because the E400 workaround is not enabled.
Signed-off-by: Hans Rosenfeld <hans.rosenfeld@amd.com>
LKML-Reference: <1282141190-930137-1-git-send-email-hans.rosenfeld@amd.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When parsing exponent-expressed intervals we subtract 1 from the
value and then expect it to match the original + 1, which is
highly unlikely, and we end up with frequent spew:
usb 3-4: ep 0x83 - rounding interval to 512 microframes
Also, parsing interval for fullspeed isochronous endpoints was
incorrect - according to USB spec they use exponent-based
intervals (but xHCI spec claims frame-based intervals). I trust
USB spec more, especially since USB core agrees with it.
This should be queued for stable kernels back to 2.6.31.
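For reference, an exponent-expressed bInterval decodes as 2^(bInterval - 1) (micro)frames, e.g. (ep stands for the usb_host_endpoint being parsed):

    /* sketch: bInterval = 4  =>  interval = 1 << (4 - 1) = 8 (micro)frames */
    unsigned int interval = 1 << (ep->desc.bInterval - 1);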
Reviewed-by: Micah Elizabeth Scott <micah@vmware.com> Signed-off-by: Dmitry Torokhov <dtor@vmware.com> Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch (as1458) fixes a problem affecting ultra-reliable systems:
When hardware failover of an EHCI controller occurs, the data
structures do not get released correctly. This is because the routine
responsible for removing unused QHs from the async schedule assumes
the controller is running properly (the frame counter is used in
determining how long the QH has been idle) -- but when a failover
causes the controller to be electronically disconnected from the PCI
bus, obviously it stops running.
The solution is simple: Allow scan_async() to remove a QH from the
async schedule if it has been idle for long enough _or_ if the
controller is stopped.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reported-and-Tested-by: Dan Duval <dan.duval@stratus.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
next_pidmap() just quietly accepted whatever 'last' pid that was passed
in, which is not all that safe when one of the users is /proc.
Admittedly the proc code should do some sanity checking on the range
(and that will be the next commit), but that doesn't mean that the
helper functions should just do that pidmap pointer arithmetic without
checking the range of its arguments.
So clamp 'last' to PID_MAX_LIMIT. The fact that we then do "last+1"
doesn't really matter, the for-loop does check against the end of the
pidmap array properly (it's only the actual pointer arithmetic overflow
case we need to worry about, and going one bit beyond isn't going to
overflow).
[ Use PID_MAX_LIMIT rather than pid_max as per Eric Biederman ]
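In sketch form, the clamp at the top of the helper:

    /* refuse out-of-range input before any pidmap pointer arithmetic */
    if (last >= PID_MAX_LIMIT)
            return -1;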
Reported-by: Tavis Ormandy <taviso@cmpxchg8b.com> Analyzed-by: Robert Święcki <robert@swiecki.net> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch adds to the option driver the Onda Communication
(http://www.ondacommunication.com) vendor id, and the MT825UP modem
device id.
Note that many variants of this same device are being released here in
Italy (at least one or two per telephony operator).
These devices are perfectly equivalent except for some predefined
settings (which can be changed of course).
It should be noted that most ONDA devices are already supported (they
used other vendors' IDs in the past). The patch seems to work fine here,
and the rest of the driver seems unaffected.
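The mechanics are the usual option-driver pattern; a sketch with placeholder ID values (the real values are in the patch and are not reproduced here):

    #define ONDA_VENDOR_ID              0x1234  /* placeholder, not the real ID */
    #define ONDA_MT825UP_PRODUCT_ID     0x5678  /* placeholder, not the real ID */

    /* added to the driver's usb_device_id table: */
    { USB_DEVICE(ONDA_VENDOR_ID, ONDA_MT825UP_PRODUCT_ID) },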