The current code checks for stored_mpdu_num > 1, causing
the reorder_timer to be triggered indefinitely, but the
frame is never timed out (until the next packet is received).
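A minimal sketch of the intended check, assuming the surrounding mac80211
reorder-buffer context (field and timeout names follow mac80211; the exact
placement in the release path may differ):

	/* Sketch only: with a single buffered frame, stored_mpdu_num == 1, so
	 * a '> 1' test keeps rearming the timer without ever releasing the
	 * frame; testing '> 0' lets the timer handler time it out. */
	if (tid_agg_rx->stored_mpdu_num > 0)	/* was: > 1 */
		mod_timer(&tid_agg_rx->reorder_timer,
			  jiffies + HT_RX_REORDER_BUF_TIMEOUT);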
Signed-off-by: Eliad Peller <eliad@wizery.com> Acked-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=44263 Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Keith Packard <keithp@keithp.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Correct sk_forward_alloc handling for the error queue would need to use a
backlog of frames that the softirq handler could not deliver because the
socket is owned by a user thread, or to extend backlog processing to be
able to handle both normal and error packets.
Another possibility is to not use a mem charge for the error queue; this is
what I implemented in this patch.
Note: this reverts commit 29030374
(net: fix sk_forward_alloc corruptions), since we don't need to lock the
socket anymore.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Cc: 单卫 <shanwei88@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
As David found out, sock_queue_err_skb() should be called with the socket
lock held, or we risk sk_forward_alloc corruption, since we use non-atomic
operations to update this field.
This patch adds a bh_lock_sock()/bh_unlock_sock() pair at three spots
(BH is already disabled at each); a sketch of the pattern follows the list.
1) skb_tstamp_tx()
2) Before calling ip_icmp_error(), in __udp4_lib_err()
3) Before calling ipv6_icmp_error(), in __udp6_lib_err()
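For illustration, a minimal sketch of the pattern at spot 3, assuming the
usual __udp6_lib_err local variables:

	/* BH is already disabled in this error handler, so a plain
	 * bh_lock_sock()/bh_unlock_sock() pair is enough to serialize the
	 * non-atomic sk_forward_alloc update done under ipv6_icmp_error(). */
	bh_lock_sock(sk);
	ipv6_icmp_error(sk, skb, err, uh->dest, ntohl(info), (u8 *)(uh + 1));
	bh_unlock_sock(sk);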
Reported-by: Anton Blanchard <anton@samba.org> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Cc: 单卫 <shanwei88@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Axel Lin <axel.lin@gmail.com> Acked-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The Netlogic XLP SoC's on-chip USB controller appears as a PCI
USB device, but does not need the EHCI/OHCI handoff done in
usb/host/pci-quirks.c.
The pci-quirks.c handling applies to all vendors and devices, and is
built in whenever USB and PCI are configured.
If we do not skip the quirk handling on XLP, the readb() call in
ehci_bios_handoff() will cause a crash, since byte access is not
supported for EHCI registers on XLP.
Signed-off-by: Jayachandran C <jayachandranc@netlogicmicro.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
ab943a2e125b (USB: gadget: gadget zero uses new suspend/resume hooks)
introduced a copy-paste error where f_loopback.c writes to a variable
declared in f_sourcesink.c. This prevents one from creating gadgets
that only have a loopback function.
Signed-off-by: Timo Juhani Lindfors <timo.lindfors@iki.fi> Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Opening the binder driver and sharing the file returned with
other processes (e.g. by calling fork) can crash the kernel.
Prevent these crashes with the following changes:
- Add a mutex to protect against two processes mmapping the
same binder_proc.
- After locking mmap_sem, check that the vma we want to access
(still) points to the same mm_struct.
- Use proc->tsk instead of current to get the files struct since
this is where we get the rlimit from.
If user-space partially unmaps the driver, binder_vma_open
would dump the kernel stack. This is not a kernel bug, however,
and the area will be treated as if it had been wholly unmapped once
binder_vma_close gets called.
Fix the image processing by special-casing val == 0.
I have tested this change on an Asus G50V laptop only.
Cc: Jakub Schmidtke <sjakub@gmail.com> Cc: Kevin A. Granade <kevin.granade@gmail.com> Signed-off-by: Pekka Paalanen <pq@iki.fi> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
ecryptfs_write() can enter an infinite loop when truncating a file to a
size larger than 4G. This only happens on architectures where size_t is
represented by 32 bits.
It is caused by a size_t overflow: size_t was incorrectly used to store
the result of a calculation involving potentially large values of
type loff_t.
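A minimal user-space illustration of that truncation (the variable names
here are hypothetical, not the driver's):

	#include <stdio.h>

	int main(void)
	{
		long long offset = 5LL << 30;              /* truncation target beyond 4G */
		unsigned int pos32 = (unsigned int)offset; /* what a 32-bit size_t keeps */

		/* The high bits are lost, so a loop bounded by pos32 never
		 * reaches the intended end offset. */
		printf("loff_t value: %lld, 32-bit size_t value: %u\n", offset, pos32);
		return 0;
	}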
[tyhicks@canonical.com: rewrite subject and commit message] Signed-off-by: Li Wang <liwang@nudt.edu.cn> Signed-off-by: Yunchuan Wen <wenyunchuan@kylinos.com.cn> Reviewed-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The TV Out refresh rate was half the specified rate for almost all modes.
As a result, the pixel clock was too low for some modes, causing a flickering screen.
When we hit EIO while writing the LVID, the buffer's uptodate bit is cleared.
This then results in an annoying warning from mark_buffer_dirty() when we
write the buffer again. So just set the uptodate flag unconditionally.
Reviewed-by: Namjae Jeon <linkinjeon@gmail.com> Signed-off-by: Jan Kara <jack@suse.cz> Cc: Dave Jones <davej@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If an NFSv4 client sends a request before connecting, or the old connection was
broken because of an ETIMEDOUT error caught by call_status, ->send_request will
return ENOTSOCK, but the RPC layer cannot deal with it, so make sure
->send_request translates ENOTSOCK into ENOTCONN.
NFSv4 open recovery is currently broken: since we do not clear the flags in
state->flags before attempting recovery, we end up with the
'can_open_cached()' function triggering. This again leads to no OPEN call
being put on the wire.
nfs4_recovery_handle_error() will correctly handle errors such as
NFS4ERR_CB_PATH_DOWN, however because they are still passed back to the
main loop in nfs4_state_manager(), they can cause the latter to exit
prematurely.
Fix this by letting nfs4_recovery_handle_error() change the error value in
cases where there is no action required by the caller.
Fix a race condition that shows in conjunction with xip_file_fault() when
two threads of the same user process fault on the same memory page.
In this case, the race winner will install the page table entry and the
unlucky loser will cause an oops: xip_file_fault calls vm_insert_pfn (via
vm_insert_mixed) which drops out at this check:
	retval = -EBUSY;
	if (!pte_none(*pte))
		goto out_unlock;
The resulting -EBUSY return value will trigger a BUG_ON() in
xip_file_fault.
This fix simply considers the fault as fixed in this case, because the
race winner has successfully installed the pte.
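A hedged sketch of that idea in the fault handler (simplified, not the
exact upstream diff; the pfn variable is illustrative):

	err = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, xip_pfn);
	if (err == -EBUSY)
		/* Another thread raced us and already installed the pte, so
		 * the fault is effectively fixed; don't trip the BUG_ON(). */
		return VM_FAULT_NOPAGE;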
[akpm@linux-foundation.org: use conventional (and consistent) comment layout] Reported-by: David Sadler <dsadler@us.ibm.com> Signed-off-by: Carsten Otte <cotte@de.ibm.com> Reported-by: Louis Alex Eisner <leisner@cs.ucsd.edu> Cc: Hugh Dickins <hughd@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In the current code, vendor-specific MADs (e.g with the FDR-10
attribute) are silently dropped by the driver, resulting in timeouts
at the sending side and inability to query/configure the relevant
feature. However, the ConnectX firmware is able to handle such MADs.
For unsupported attributes, the firmware returns a GET_RESPONSE MAD
containing an error status.
For example, for a FDR-10 node with LID 11:
# ibstat mlx4_0 1
CA: 'mlx4_0'
Port 1:
State: Active
Physical state: LinkUp
Rate: 40 (FDR10)
Base lid: 11
LMC: 0
SM lid: 24
Capability mask: 0x02514868
Port GUID: 0x0002c903002e65d1
Link layer: InfiniBand
Extended Port Info (EPI) vendor MAD query times out before the patch:
# smpquery MEPI 11 -d
ibwarn: [4196] smp_query_via: attr 0xff90 mod 0x0 route Lid 11
ibwarn: [4196] _do_madrpc: retry 1 (timeout 1000 ms)
ibwarn: [4196] _do_madrpc: retry 2 (timeout 1000 ms)
ibwarn: [4196] _do_madrpc: timeout after 3 retries, 3000 ms
ibwarn: [4196] mad_rpc: _do_madrpc failed; dport (Lid 11)
smpquery: iberror: [pid 4196] main: failed: operation EPI: ext port info query failed
EPI query works OK with the patch:
# smpquery MEPI 11 -d
ibwarn: [6548] smp_query_via: attr 0xff90 mod 0x0 route Lid 11
ibwarn: [6548] mad_rpc: data offs 64 sz 64
mad data
0000 0000 0000 0001 0000 0001 0000 0001
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
# Ext Port info: Lid 11 port 0
StateChangeEnable:...............0x00
LinkSpeedSupported:..............0x01
LinkSpeedEnabled:................0x01
LinkSpeedActive:.................0x01
Signed-off-by: Jack Morgenstein <jackm@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Acked-by: Ira Weiny <weiny2@llnl.gov> Signed-off-by: Roland Dreier <roland@purestorage.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fix regression introduced by commit b1ffb4c851f1 ("USB: Fix Corruption
issue in USB ftdi driver ftdi_sio.c") which caused the termios settings
to no longer be initialised at open. Consequently it was no longer
possible to set the port to the default speed of 9600 baud without first
changing to another baud rate and back again.
Reported-by: Roland Ramthun <mail@roland-ramthun.de> Signed-off-by: Johan Hovold <jhovold@gmail.com> Tested-by: Roland Ramthun <mail@roland-ramthun.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Properly clamp temperature limits set by the user. Without this fix,
attempts to write temperature limits above the maximum supported by
the chip (255 degrees Celsius) would arbitrarily and unexpectedly
result in the limit being set to 0 degrees Celsius.
This changes the max length for the Delcom USB seven-segment device from 6
to 8. Delcom has both 6 and 8 variants, and a length of 8 works fine with
devices that only have 6.
Signed-off-by: Harrison Metzger <harrisonmetz@gmail.com> Signed-off-by: Stuart Pook <stuart@acm.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Calling edge_remove_sysfs_attrs from edge_disconnect is too late
as the device has already been removed from sysfs.
Do the simple and obvious thing and make edge_remove_sysfs_attrs
the port_remove method.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Reported-by: Wolfgang Frisch <wfpub@roembden.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Return EINVAL if the new baud_base does not match the current one.
The baud_base is device specific and cannot be changed. This restores
the old (pre-2005) behaviour, which was changed due to a
misunderstanding regarding this fact (see
https://lkml.org/lkml/2005/1/20/84).
[ Changes with respect to 3.3: return -ENOTTY from scsi_verify_blk_ioctl
and -ENOIOCTLCMD from sd_compat_ioctl. ]
Linux allows executing the SG_IO ioctl on a partition or LVM volume, and
will pass the command to the underlying block device. This is
well-known, but it is also a large security problem when (via Unix
permissions, ACLs, SELinux or a combination thereof) a program or user
needs to be granted access only to part of the disk.
This patch lets partitions forward a small set of harmless ioctls;
others are logged with printk so that we can see which ioctls are
actually sent. In my tests only CDROM_GET_CAPABILITY actually occurred.
Of course it was being sent to a (partition on a) hard disk, so it would
have failed with ENOTTY and the patch isn't changing anything in
practice. Still, I'm treating it specially to avoid spamming the logs.
In principle, this restriction should include programs running with
CAP_SYS_RAWIO. If for example I let a program access /dev/sda2 and
/dev/sdb, it still should not be able to read/write outside the
boundaries of /dev/sda2 independent of the capabilities. However, for
now programs with CAP_SYS_RAWIO will still be allowed to send the
ioctls. Their actions will still be logged.
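A rough sketch of the checking logic being described (the ioctl names are
real; the helper itself is illustrative, not the exact upstream function):

	static int verify_partition_ioctl(struct block_device *bdev, unsigned int cmd)
	{
		if (bdev == bdev->bd_contains)		/* whole device: unrestricted */
			return 0;

		switch (cmd) {
		/* harmless ioctls that partitions may forward */
		case SCSI_IOCTL_GET_IDLUN:
		case SCSI_IOCTL_GET_BUS_NUMBER:
		case CDROM_GET_CAPABILITY:
			return 0;
		}

		/* everything else is logged; CAP_SYS_RAWIO is still allowed for now */
		printk_ratelimited(KERN_WARNING
				   "%s: sending ioctl %x to a partition!\n",
				   current->comm, cmd);

		return capable(CAP_SYS_RAWIO) ? 0 : -ENOTTY;
	}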
This patch does not affect the non-libata IDE driver. That driver
however already tests for bd != bd->bd_contains before issuing some
ioctl; it could be restricted further to forbid these ioctls even for
programs running with CAP_SYS_ADMIN/CAP_SYS_RAWIO.
Cc: linux-scsi@vger.kernel.org Cc: Jens Axboe <axboe@kernel.dk> Cc: James Bottomley <JBottomley@parallels.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[ Make it also print the command name when warning - Linus ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[bwh: Backport to 2.6.32 - ENOIOCTLCMD does not get converted to
ENOTTY, so we must return ENOTTY directly] Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Revert "ARM: 7220/1: mmc: mmci: Fixup error handling for dma"
This reverts commit c8cdf3f97d34d906e0519d5cbc2ab3f81269d0b4, applied on
linux 2.6.32.53 stable release, as it can introduce the following build
error while building 2.6.32.y on armel:
linux-2.6.32/drivers/mmc/host/mmci.c: In function 'mmci_cmd_irq':
linux-2.6.32/drivers/mmc/host/mmci.c:237: error: implicit declaration of function 'dma_inprogress'
linux-2.6.32/drivers/mmc/host/mmci.c:238: error: implicit declaration of function 'mmci_dma_data_error'
Apparently the commit was wrongly pushed into 2.6.32, since it depends on
commit c8ebae37 ("ARM: mmci: add dmaengine-based DMA support"), which is not
present in 2.6.32.
Signed-off-by: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
For rounds 16--79, W[i] only depends on W[i - 2], W[i - 7], W[i - 15] and W[i - 16].
Consequently, keeping the whole W[80] array on the stack is unnecessary;
only 16 values are really needed.
Using W[16] instead of W[80] greatly reduces stack usage
(~750 bytes down to ~340 bytes on x86_64).
Line by line explanation:
* BLEND_OP
The array is "circular" now, so all indexes have to be taken modulo 16.
The round number is non-negative, so the remainder operation behaves
without surprises (see the sketch after this list).
* The initial full message schedule is trimmed to the first 16 values, which
come from the data block; the rest is calculated right before it is needed.
* The original loop body is an unrolled version of the new SHA512_0_15 and
SHA512_16_79 macros; unrolling was done to avoid explicit variable
renaming. Otherwise it is the very same code after preprocessing.
See the sha1_transform() code, which does the same trick.
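A simplified sketch of the circular 16-entry schedule in plain C (not the
kernel's macro form; s0/s1 are the standard SHA-512 small sigma functions
from FIPS 180-2):

	#include <stdint.h>

	static inline uint64_t ror64(uint64_t x, unsigned int n)
	{
		return (x >> n) | (x << (64 - n));
	}

	#define s0(x) (ror64(x, 1)  ^ ror64(x, 8)  ^ ((x) >> 7))
	#define s1(x) (ror64(x, 19) ^ ror64(x, 61) ^ ((x) >> 6))

	/* For rounds 16..79: overwrite the slot that currently holds W[i - 16],
	 * since only the last 16 schedule words are ever needed again.  All
	 * indexes are taken modulo 16; i is non-negative, so '%' behaves as
	 * expected. */
	static void sha512_schedule_step(uint64_t W[16], unsigned int i)
	{
		W[i % 16] += s1(W[(i - 2) % 16]) + W[(i - 7) % 16]
			     + s0(W[(i - 15) % 16]);
	}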
The patch survives the in-tree crypto test and the original bug report test
(a ping flood with hmac(sha512)).
See FIPS 180-2 for the SHA-512 definition:
http://csrc.nist.gov/publications/fips/fips180-2/fips180-2withchangenotice.pdf
If sha512_update() is ever entered twice concurrently, the hash will be
silently calculated incorrectly.
Probably the easiest way to notice incorrect hashes being calculated is
to run 2 ping floods over AH with hmac(sha512):
#!/usr/sbin/setkey -f
flush;
spdflush;
add IP1 IP2 ah 25 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000025;
add IP2 IP1 ah 52 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000052;
spdadd IP1 IP2 any -P out ipsec ah/transport//require;
spdadd IP2 IP1 any -P in ipsec ah/transport//require;
XfrmInStateProtoError will start ticking with -EBADMSG being returned
from ah_input(). This never happens with, say, hmac(sha1).
With patch applied (on BOTH sides), XfrmInStateProtoError does not tick
with multiple bidirectional ping flood streams like it doesn't tick
with SHA-1.
After this patch sha512_transform() will start using ~750 bytes of stack on x86_64.
This is OK for simple loads; for something heavier, stack reduction will be done
separately.
If the master tries to authenticate a client using drm_authmagic and
that client has already closed its drm file descriptor,
either wilfully or because it was terminated, the
call to drm_authmagic will dereference a stale pointer into kmalloc'ed memory
and corrupt it.
Typically this results in a hard system hang.
This patch fixes that problem by removing any authentication tokens
(struct drm_magic_entry) open for a file descriptor when that file
descriptor is closed.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
ecryptfs_write() handles the truncation of eCryptfs inodes. It grabs a
page, zeroes out the appropriate portions, and then encrypts the page
before writing it to the lower filesystem. It was unkillable and due to
the lack of sparse file support could result in tying up a large portion
of system resources, while encrypting pages of zeros, with no way for
the truncate operation to be stopped from userspace.
This patch adds the ability for ecryptfs_write() to detect a pending
fatal signal and return as gracefully as possible. The intent is to
leave the lower file in a usable state, while still allowing a user to
break out of the encryption loop. If a pending fatal signal is detected,
the eCryptfs inode size is updated to reflect the modified inode size
and then -EINTR is returned.
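A minimal sketch of the loop-exit path being described (placement within
the write loop and the position variable are illustrative):

	if (fatal_signal_pending(current)) {
		/* reflect what was actually written before bailing out */
		if (pos > i_size_read(ecryptfs_inode))
			i_size_write(ecryptfs_inode, pos);
		rc = -EINTR;
		break;
	}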
Print the inode on metadata read failure. The only real
way of dealing with metadata read failures is to delete
the underlying file system file. Having the inode
allows one to run `find . -inum INODE`.
[tyhicks@canonical.com: Removed some minor not-for-stable parts] Signed-off-by: Tim Gardner <tim.gardner@canonical.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
A malicious count value specified when writing to /dev/ecryptfs may
result in a very large kernel memory allocation.
This patch peeks at the specified packet payload size, adds that to the
size of the packet headers and compares the result with the write count
value. The resulting maximum memory allocation size is approximately 532
bytes.
Commit ef53d9c5e ("kprobes: improve kretprobe scalability with hashed
locking") introduced a bug where we can potentially leak
kretprobe_instances since we initialize a hlist head after having used
it.
Initialize the hlist head before using it.
Reported by: Jim Keniston <jkenisto@us.ibm.com> Acked-by: Jim Keniston <jkenisto@us.ibm.com> Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Srinivasa D S <srinivasa@in.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If the provided system call number is equal to __NR_syscalls, the
current check will pass and a function pointer just after the system
call table may be called, since sys_call_table is an array with total
size __NR_syscalls.
Whether or not this is a security bug depends on what the compiler puts
immediately after the system call table. It's likely that this won't do
anything bad because there is an additional NULL check on the syscall
entry, but if there happens to be a non-NULL value immediately after the
system call table, this may result in local privilege escalation.
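A minimal sketch of the off-by-one (illustrative dispatch code, not any
particular architecture's entry path):

	typedef long (*syscall_fn_t)(long, long, long, long, long, long);

	extern const syscall_fn_t sys_call_table[];	/* __NR_syscalls entries */

	long dispatch(unsigned int scno, long a0, long a1, long a2,
		      long a3, long a4, long a5)
	{
		/* The table has exactly __NR_syscalls entries, so the bound must
		 * be exclusive; a '>' comparison lets scno == __NR_syscalls index
		 * one slot past the end of the table. */
		if (scno >= __NR_syscalls || !sys_call_table[scno])
			return -ENOSYS;

		return sys_call_table[scno](a0, a1, a2, a3, a4, a5);
	}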
sym53c8xx_slave_destroy unconditionally assumes that sym53c8xx_slave_alloc has
successfully allocated a sym_lcb. This can lead to a NULL pointer dereference
(exposed by commit 4e6c82b).
Add a printk_ratelimited statement-expression macro that uses a per-call
ratelimit_state, so that messages output by multiple subsystems are not
suppressed by a global __ratelimit state.
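A usage sketch (the driver and message are made up):

	#include <linux/printk.h>

	static void report_drop(void)
	{
		/* Each printk_ratelimited() call site expands to its own static
		 * ratelimit_state, so throttling this message does not suppress
		 * rate-limited messages from other subsystems. */
		printk_ratelimited(KERN_WARNING "mydev0: dropping packet, queue full\n");
	}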
In the WDM class driver a disconnect event leads to calls to
usb_free_coherent to put back two USB DMA buffers allocated earlier.
The call to usb_free_coherent uses a different size parameter
(desc->wMaxCommand) than the corresponding call to usb_alloc_coherent
(desc->bMaxPacketSize0).
When a disconnect event occurs, this leads to 'bad dma' complaints
from usb core because the USB DMA buffer is being pushed back to the
'buffer-2048' pool from which it has not been allocated.
This patch against the most recent linux-2.6 kernel ensures that the
parameters used by usb_alloc_coherent & usb_free_coherent calls in
cdc-wdm.c match.
For 32-bit architectures using standard jiffies, the idle-time calculation
in uptime_proc_show will quickly overflow. It takes (2^32 / HZ) seconds
of accumulated idle time, e.g. 12.45 days with no load on a quad-core with
HZ=1000, since the idle time of all four cores is summed.
Switch to 64-bit calculations.
Cc: Michael Abbott <michael.abbott@diamond.ac.uk> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
expkey_parse() oopses when handling a 0 length export. This is easily
triggerable from usermode by writing 0 bytes into
'/proc/[proc id]/net/rpc/nfsd.fh/channel'.
Cc: "J. Bruce Fields" <bfields@fieldses.org> Cc: Neil Brown <neilb@suse.de> Cc: linux-nfs@vger.kernel.org Signed-off-by: Sasha Levin <levinsasha928@gmail.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The pool_to and to_pool fields of the global svc_pool_map are freed on
shutdown, but are initialized in nfsd startup only in the
SVC_POOL_PERCPU and SVC_POOL_PERNODE cases.
They *are* initialized to zero on kernel startup. So as long as you use
only SVC_POOL_GLOBAL (the default), this will never be a problem.
You're also OK if you only ever use SVC_POOL_PERCPU or SVC_POOL_PERNODE.
However, the following sequence of events leads to a double-free:
1. set SVC_POOL_PERCPU or SVC_POOL_PERNODE
2. start nfsd: both fields are initialized.
3. shutdown nfsd: both fields are freed.
4. set SVC_POOL_GLOBAL
5. start nfsd: the fields are left untouched.
6. shutdown nfsd: now we try to free them again.
Step 4 is actually unnecessary, since (for some bizarre reason), nfsd
automatically resets the pool mode to SVC_POOL_GLOBAL on shutdown.
Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If ctrls->count is too high, the multiplication could overflow and
array_size would be lower than expected. Mauro and Hans Verkuil
suggested that we cap it at 1024. That comes from the maximum
number of controls, with lots of room for expansion.
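A minimal sketch of such a cap (the constant name is hypothetical; the
surrounding ioctl handling is simplified):

	#define MAX_EXT_CTRLS 1024	/* generous upper bound on ctrls->count */

	if (ctrls->count > MAX_EXT_CTRLS)
		return -EINVAL;

	/* with count capped, this multiplication can no longer overflow */
	array_size = ctrls->count * sizeof(ctrls->controls[0]);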
When adding checks for ACPI resource conflicts to many bus drivers,
not enough attention was paid to the error paths, and for several
drivers this causes 0 to be returned on error in some cases. Fix this
by properly returning a non-zero value on every error.
Signed-off-by: Jean Delvare <khali@linux-fr.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
On x86_32 casting the unsigned int result of get_random_int() to
long may result in a negative value. On x86_32 the range of
mmap_rnd() therefore was -255 to 255. The 32bit mode on x86_64
used 0 to 255 as intended.
The bug was introduced by 675a081 ("x86: unify mmap_{32|64}.c")
in January 2008.
Signed-off-by: Ludwig Nussel <ludwig.nussel@suse.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: harvey.harrison@gmail.com Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/201111152246.pAFMklOB028527@wpaz5.hot.corp.google.com Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Some Dell BIOSes have MCFG tables that don't report the entire
MMCONFIG area claimed by the chipset. If we move PCI devices into
that claimed-but-unreported area, they don't work.
This quirk reads the AMD MMCONFIG MSRs and adds PNP0C01 resources as
needed to cover the entire area.
Example problem scenario:
BIOS-e820: 00000000cfec5400 - 00000000d4000000 (reserved)
Fam 10h mmconf [d0000000, dfffffff]
PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xd0000000-0xd3ffffff] (base 0xd0000000)
pnp 00:0c: [mem 0xd0000000-0xd3ffffff]
pci 0000:00:12.0: reg 10: [mem 0xffb00000-0xffb00fff]
pci 0000:00:12.0: no compatible bridge window for [mem 0xffb00000-0xffb00fff]
pci 0000:00:12.0: BAR 0: assigned [mem 0xd4000000-0xd40000ff]
Info about new measurements is cached in the iint for performance. When
the inode is flushed from the cache, the associated iint is flushed as well.
Subsequent access to the inode will cause the inode to be re-measured and
will attempt to add a duplicate entry to the measurement list.
This patch frees the duplicate measurement memory, fixing a memory leak.
There is a potential integer overflow in process_msg() that could result
in cross-domain attack.
body = kmalloc(msg->hdr.len + 1, GFP_NOIO | __GFP_HIGH);
When a malicious guest passes 0xffffffff in msg->hdr.len, the subsequent
call to xb_read() would write to a zero-length buffer.
The other end of this connection is always the xenstore backend daemon
so there is no guest (malicious or otherwise) which can do this. The
xenstore daemon is a trusted component in the system.
However, this seems like a reasonable robustness improvement, so we should
have it.
And Ian, when reading the API docs, found that:
The payload length (len field of the header) is limited to 4096
(XENSTORE_PAYLOAD_MAX) in both directions. If a client exceeds the
limit, its xenstored connection will be immediately killed by
xenstored, which is usually catastrophic from the client's point of
view. Clients (particularly domains, which cannot just reconnect)
should avoid this.
so this patch checks against that instead.
This also avoids a potential integer overflow pointed out by Haogang Chen.
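A sketch of the added check (XENSTORE_PAYLOAD_MAX comes from the xenstore
wire-protocol header; error handling simplified):

	/* Reject anything larger than the protocol allows before trusting
	 * hdr.len in the allocation below; this also rules out the
	 * hdr.len == 0xffffffff overflow case. */
	if (msg->hdr.len > XENSTORE_PAYLOAD_MAX) {
		err = -EINVAL;
		goto out;
	}

	body = kmalloc(msg->hdr.len + 1, GFP_NOIO | __GFP_HIGH);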
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Cc: Haogang Chen <haogangchen@gmail.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
I traced a nasty kexec on panic boot failure to the fact that we had
screaming msi interrupts and we were not disabling the msi messages at
kernel startup. The booting kernel had not enabled those interrupts so
was not prepared to handle them.
I can see no reason why we would ever want to leave the msi interrupts
enabled at boot if something else has enabled those interrupts. The pci
spec specifies that msi interrupts should be off by default. Drivers
are expected to enable the msi interrupts if they want to use them. Our
interrupt handling code reprograms the interrupt handlers at boot and
will not be able to do anything useful with an unexpected interrupt.
This patch applies cleanly all of the way back to 2.6.32 where I noticed
the problem.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When we fail to erase a PEB, we free the corresponding erase entry object,
but then re-schedule this object if the error code was something like -EAGAIN.
Obviously, it is a bug to use the object after we have freed it.
When an invalid NID is given, get_wcaps() returns zero as the error,
but get_wcaps_type() takes it as the normal value and returns a bogus
AC_WID_AUD_OUT value. This confuses the parser.
With this patch, get_wcaps_type() returns -1 when value 0 is given,
i.e. an invalid NID is passed to get_wcaps().
Commit 503358ae01b70ce6909d19dd01287093f6b6271c ("ext4: avoid divide by
zero when trying to mount a corrupted file system") fixes CVE-2009-4307
by performing a sanity check on s_log_groups_per_flex, since it can be
set to a bogus value by an attacker.
This patch fixes two potential issues in the previous commit.
1) The sanity check might only work on architectures like PowerPC.
On x86, 5 bits are used for the shifting amount. That means, given a
large s_log_groups_per_flex value like 36, groups_per_flex = 1 << 36
is essentially 1 << 4 = 16, rather than 0. This will bypass the check,
leaving s_log_groups_per_flex and groups_per_flex inconsistent.
2) The sanity check relies on undefined behavior, i.e., an oversized shift.
A standard-conforming C compiler could rewrite the check in unexpected
ways. Consider the following equivalent form, assuming groups_per_flex
is unsigned for simplicity.
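A minimal sketch of such an equivalent form (a reconstruction, not the
verbatim snippet from the original message):

	unsigned int groups_per_flex = 1 << sbi->s_log_groups_per_flex;

	if (groups_per_flex == 0) {
		/* With an oversized shift the expression above is undefined
		 * behavior, so the compiler may assume it can never be 0 and
		 * delete this branch entirely. */
		sbi->s_log_groups_per_flex = 0;
		return 1;	/* fall back to no flex groups */
	}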
We compile the code snippet using Clang 3.0 and GCC 4.6. Clang will
completely optimize away the check groups_per_flex == 0, leaving the
patched code as vulnerable as the original. GCC keeps the check, but
there is no guarantee that future versions will do the same.
Signed-off-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Commit fa8b18ed didn't prevent the integer overflow and possible
memory corruption. "count" can go negative and bypass the check.
Signed-off-by: Xi Wang <xi.wang@gmail.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Ben Myers <bpm@sgi.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The CPU hotplug notifications sent out by the _cpu_up() and _cpu_down()
functions depend on the value of the 'tasks_frozen' argument passed to them
(which indicates whether tasks have been frozen or not).
(Examples for such CPU hotplug notifications: CPU_ONLINE, CPU_ONLINE_FROZEN,
CPU_DEAD, CPU_DEAD_FROZEN).
Thus, it is essential that while the callbacks for those notifications are
running, the state of the system with respect to the tasks being frozen or
not remains unchanged, *throughout that duration*. Hence there is a need for
synchronizing the CPU hotplug code with the freezer subsystem.
Since the freezer is involved only in the Suspend/Hibernate call paths, this
patch hooks the CPU hotplug code to the suspend/hibernate notifiers
PM_[SUSPEND|HIBERNATE]_PREPARE and PM_POST_[SUSPEND|HIBERNATE] to prevent
the race between CPU hotplug and freezer, thus ensuring that CPU hotplug
notifications will always be run with the state of the system really being
what the notifications indicate, _throughout_ their execution time.
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
At this point, if skb->len happens to be 2, the subsequent skb_pull(skb, 4)
call won't work, so skb->len won't be decreased and won't ever reach 0,
resulting in an infinite loop.
With an ASIX 88772 under heavy load, without this patch, rx_fixup() reaches
an infinite loop in less than a minute. With this patch applied,
no infinite loop even after hours of heavy load.
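A hedged sketch of the kind of guard that breaks the cycle (the loop shown
is illustrative, not the exact driver diff):

	/* skb_pull() returns NULL and leaves skb->len untouched when asked for
	 * more bytes than remain, so with only 2 bytes left the old
	 * 'while (skb->len > 0)' condition never became false.  Requiring a
	 * full 32-bit packet header before each iteration avoids that. */
	while (skb->len >= sizeof(u32)) {	/* was: while (skb->len > 0) */
		u32 header = get_unaligned_le32(skb->data);

		skb_pull(skb, sizeof(u32));
		/* ... consume the packet described by 'header' ... */
	}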
Signed-off-by: Aurelien Jacobs <aurel@gnuage.org> Cc: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
Fix regression introduced by commit 507ca9bc047666 ([PATCH] USB: add
ability for usb-serial drivers to determine if their write urb is
currently being used.) which inverted the logic in write_room so that it
returns zero when the write urb is actually free.
Signed-off-by: Johan Hovold <jhovold@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Echo the vendor and product number of a non-usb-storage device into the
usb-storage driver's new_id, then plug the device into the host, and you
will hit the following oops. The root cause is that usb_stor_probe1()
refers to an invalid id entry when given a dynamic id, so just disable the
feature.
We were sending data on the stack when uploading firmware, which gives
some machines fits and is not allowed. Fix this by using the buffer we
already had around for this very purpose.
Reported-by: Wouter M. Koolen <wmkoolen@cwi.nl> Tested-by: Wouter M. Koolen <wmkoolen@cwi.nl> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
On some failures, the country_code field of an acm structure is freed
without freeing the acm structure itself. Elsewhere, operations including
memcpy and kfree are performed on the country_code field. The patch sets
the country_code field to NULL when it is freed, and likewise sets the
country_code_size field to 0.
Signed-off-by: Julia Lawall <julia@diku.dk> Acked-by: Oliver Neukum <oneukum@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The documentation for usbmon is out of date; the usbfs "devices" file
now exists in /sys/kernel/debug/usb rather than /proc/bus/usb. This
patch (as1505) updates the documentation accordingly, and also
mentions that the necessary information can be found by running lsusb.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu> CC: Pete Zaitcev <zaitcev@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch fixes a crash in reiserfs_delete_xattrs during umount.
When shrink_dcache_for_umount clears the dcache from
generic_shutdown_super, delayed evictions are forced to disk. If an
evicted inode has extended attributes associated with it, it will
need to walk the xattr tree to locate and remove them.
But since shrink_dcache_for_umount will BUG if it encounters active
dentries, the xattr tree must be released before it's called or it will
crash during every umount.
This patch forces the evictions to occur before generic_shutdown_super
by calling shrink_dcache_sb first. The additional evictions caused
by the removal of each associated xattr file and dir will be automatically
handled as they're added to the LRU list.
CC: reiserfs-devel@vger.kernel.org Signed-off-by: Jeff Mahoney <jeffm@suse.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Adds the device id needed for the USB Ethernet Adapter delivered by
ASUS with their Zenbook.
Signed-off-by: Aurelien Jacobs <aurel@gnuage.org> Acked-by: Grant Grundler <grundler@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When using a >8bpp framebuffer, offb advertises truecolor, not directcolor,
and doesn't touch the color map even if it has a corresponding access method
for the real hardware.
Thus it needs to set the pseudo-palette with all 3 components of the color,
like other truecolor framebuffers, not with copies of the color index like
a directcolor framebuffer would do.
This went unnoticed for a long time because it's pretty hard to get offb
to kick in with anything but 8bpp (old BootX under MacOS will do that and
qemu does it).
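For illustration, a truecolor pseudo-palette entry is built from all three
color components (the standard fbdev pattern, not the exact offb diff; the
components are assumed to be already scaled to the field widths):

	/* truecolor: pack the three color components into one palette word */
	((u32 *)info->pseudo_palette)[regno] =
		(red   << info->var.red.offset)   |
		(green << info->var.green.offset) |
		(blue  << info->var.blue.offset);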
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This oops was reported recently:
firmware_loading_store+0xf9/0x17b
dev_attr_store+0x20/0x22
sysfs_write_file+0x101/0x134
vfs_write+0xac/0xf3
sys_write+0x4a/0x6e
system_call_fastpath+0x16/0x1b
The complete backtrace was unfortunately not captured, but details can be found
here:
https://bugzilla.redhat.com/show_bug.cgi?id=769920
The cause is fairly clear.
It's caused by the fact that firmware_loading_store has a case 0 in its
switch statement that reads and writes the fw_priv->fw pointer without the
protection of the fw_lock mutex. Since there is a window between the time that
_request_firmware sets fw_priv->fw to NULL and the time the corresponding sysfs
file is unregistered, it's possible for a user space application to race in and
write a zero to the loading file, causing a NULL dereference in
firmware_loading_store. Fix it by extending the protection of the fw_lock mutex
to cover all of the firmware_loading_store function.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>