A misconfiguration by the firmware of the U4 PCIe bridge on PowerMac G5
machines with the U4 bridge (latest generations; may also affect the iMac G5
"iSight") causes us to re-assign the PCI BARs of the video card,
which can get it out of sync with the firmware, thus breaking offb.
This works around it by fixing up the bridge configuration properly
at boot time. It also fixes a bug where the firmware provides us with
an incorrect set of accessible regions in the device-tree.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Patch f598282f5145036312d90875d0ed5c14b49fd8a7 exposed a problem in
powerpc MSI-X functionality, making network interfaces such as ixgbe
and cxgb3 stop working when MSI-X is enabled. RX interrupts were not
being generated.
The problem was caused by the MSI IRQ not being effectively unmasked
after device initialization.
Signed-off-by: Andre Detsch <adetsch@br.ibm.com> Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Since the change of how interrupts are disabled during suspend,
certain PowerBook models started exhibiting various issues during
suspend or resume from sleep.
I finally tracked it down to the code that runs various "platform"
functions (kind of little scripts extracted from the device-tree),
which uses our i2c and PMU drivers expecting interrupts to work,
and at a time where with the new scheme, they have been disabled.
This causes internal timeouts which, for some reason, result in
the PMU being unable to see the trackpad, among other issues; it really
depends on the machine. Most of the time, we fail to properly adjust
some clocks for suspend/resume, so the results are not always
predictable.
This patch fixes it by using IRQF_TIMER for both the PMU and the I2C
interrupts. I prefer doing it this way than moving the call sites since
I really want those platform functions to still be called after all
drivers (and before sysdevs).
We also do a slight cleanup of the via-pmu.c driver to make sure the
ADB autopoll mask is handled correctly when doing bus resets.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
# modprobe kvm-amd
  kvm_init()
    kvm_init_debug()   <-- second initialization clobbers kvm's
                           internal pointers to dentries
    kvm_arch_init()
    kvm_exit_debug()   <-- and frees them
If execution gets to the end of kvm_init(), then the calling module has been
established as the kvm provider. Move the debugfs initialization to the end of
the function, and remove the now-unnecessary call to kvm_exit_debug() from the
error path. That way we avoid trampling on the debugfs entries and freeing
them twice.
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The function iommu_feature_disable is required on system
shutdown to disable the IOMMU but it is marked as __init.
This may result in a panic if the memory is reused. This
patch fixes this bug.
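As an illustrative sketch of what the change amounts to (the register access in the body is an assumption for illustration; the point is only the dropped annotation), the function loses its __init marking so its text is not discarded after boot:

    /* Before:
     *     static void __init iommu_feature_disable(struct amd_iommu *iommu, u8 bit)
     * __init code is freed once boot completes, so calling it from the
     * shutdown path may jump into reused memory and panic.  After: */
    static void iommu_feature_disable(struct amd_iommu *iommu, u8 bit)
    {
            u32 ctrl;

            ctrl = readl(iommu->mmio_base + MMIO_CONTROL_OFFSET);
            ctrl &= ~(1 << bit);
            writel(ctrl, iommu->mmio_base + MMIO_CONTROL_OFFSET);
    }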
Replenishment of receive buffers is done in the tasklet handling
received frames as well as in a workqueue. When we are in the tasklet
we cannot sleep and thus attempt atomic skb allocations. It is generally
not a big problem if this fails since iwl_rx_allocate is always followed
by a call to iwl_rx_queue_restock which will queue the work to replenish
the buffers at a time when sleeping is allowed.
We thus add __GFP_NOWARN to the skb allocation in iwl_rx_allocate to
reduce the noise if such an allocation fails while we still have enough
buffers. We do maintain the warning and the error message when we are low
on buffers, to communicate to the user that there is a potential problem with
memory availability on the system.
This addresses issue reported upstream in thread "iwlagn: order 2 page
allocation failures" in
http://thread.gmane.org/gmane.linux.kernel.wireless.general/39187
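For illustration, the allocation in question ends up looking roughly like this (the buffer size expression, field names and message text are simplified assumptions, not the driver's exact code):

    /* Tasklet context: must not sleep, and don't print an allocation-failure
     * backtrace; the restock path will retry later from a context that is
     * allowed to sleep. */
    skb = alloc_skb(priv->hw_params.rx_buf_size + 256,
                    GFP_ATOMIC | __GFP_NOWARN);
    if (!skb) {
            if (net_ratelimit())
                    printk(KERN_DEBUG "iwlagn: rx skb allocation failed\n");
            break;
    }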
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Acked-by: Mel Gorman <mel@csn.ul.ie> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
RX handling maintains a few lists that keep track of the RX buffers.
Buffers move from one list to the other as they are used, replenished, and
again made available for usage. In one such instance, when a buffer is used
it enters the "rx_used" list. When buffers are replenished an skb is
attached to the buffer and it is moved to the "rx_free" list. The problem
here is that the buffer is first removed from the "rx_used" list _before_ the
skb is allocated. Thus, if the skb allocation fails this buffer remains
removed from the "rx_used" list and is thus lost for future usage.
Fix this by first allocating the skb before trying to attach it to a list.
We add an additional check to not do this unnecessarily.
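A sketch of the reordering (heavily simplified; the function and struct names are assumptions based on the driver code described above):

    /* Don't bother allocating if there is nothing to replenish. */
    spin_lock_irqsave(&rxq->lock, flags);
    if (list_empty(&rxq->rx_used)) {
            spin_unlock_irqrestore(&rxq->lock, flags);
            return;
    }
    spin_unlock_irqrestore(&rxq->lock, flags);

    /* Allocate the skb before removing the buffer from rx_used; if the
     * allocation fails the buffer stays on the list and is not lost. */
    skb = alloc_skb(priv->hw_params.rx_buf_size + 256, priority);
    if (!skb)
            return;

    spin_lock_irqsave(&rxq->lock, flags);
    if (list_empty(&rxq->rx_used)) {
            spin_unlock_irqrestore(&rxq->lock, flags);
            dev_kfree_skb_any(skb);
            return;
    }
    rxb = list_entry(rxq->rx_used.next, struct iwl_rx_mem_buffer, list);
    list_del(&rxb->list);
    spin_unlock_irqrestore(&rxq->lock, flags);

    rxb->skb = skb;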
Reported-by: Rick Farrington <rickdic@hotmail.com> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If a system switches back and forth between hot and cold mode,
the MCE code will print a stream of critical kernel messages.
Extend the throttling code to properly notice this, by
only printing the first hot + cold transition and omitting
the rest up to CHECK_INTERVAL (5 minutes).
This way we'll only get a single incident of:
[ 102.356584] CPU0: Temperature above threshold, cpu clock throttled (total events = 1)
[ 102.357000] Disabling lock debugging due to kernel taint
[ 102.369223] CPU0: Temperature/speed normal
Every 5 minutes. The 'total events' count tells the number of cold/hot
transitions detected, should overheating occur after 5 minutes again:
[ 402.357580] CPU0: Temperature above threshold, cpu clock throttled (total events = 24891)
[ 402.358001] CPU0: Temperature/speed normal
[ 450.704142] Machine check events logged
It is possible to have !Anon but SwapBacked pages, and some apps could
create a huge number of such pages with MAP_SHARED|MAP_ANONYMOUS. These
pages go onto the ANON LRU list and hence shall not be protected: we only
care about mapped executable files. Failing to do so may trigger OOM.
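In shrink_active_list() terms, the intended check is roughly the following (a sketch; the surrounding scan loop is omitted):

    /* Protect mapped executable *file* pages only.  MAP_ANONYMOUS|MAP_SHARED
     * pages are SwapBacked but !Anon, sit on the anon LRU, and must not be
     * given this protection. */
    if ((vm_flags & VM_EXEC) && page_is_file_cache(page)) {
            list_add(&page->lru, &l_active);
            continue;
    }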
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Rik van Riel <riel@redhat.com> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com> Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
On Thu, Aug 13, 2009 at 04:14:58PM +1000, Benjamin Herrenschmidt wrote:
> On Tue, 2009-08-11 at 11:39 +0200, Bastian Blank wrote:
> > This patch just disables this driver on SMP kernels, as it is obviously
> > not supported.
> Why not remove the #error instead ? :-) I don't think it's still
> meaningful, especially since we use the timebase for delays nowadays
> which doesn't depend on the CPU frequency...
Your call. Take this one:
The build of a PowerMac 32bit kernel currently fails with
error: #warning "WARNING, CPUFREQ not recommended on SMP kernels"
This patch removes the no longer applicable SMP warning from the
PowerMac cpufreq code.
The NFSv4 renew daemon is shared between all active super blocks that refer
to a particular NFS server, so it is wrong to be shutting it down in
nfs4_kill_super every time a super block is destroyed.
This patch therefore kills nfs4_renewd_prepare_shutdown altogether, and
leaves it up to nfs4_shutdown_client() to also shut down the renew daemon
by means of the existing call to nfs4_kill_renewd().
Commits 29fba38b (nfs41: lease renewal) and fc01cea9 (nfs41: sequence
operation) introduce a couple of put_rpccred() calls on credentials for
which there is no corresponding get_rpccred().
See http://bugzilla.kernel.org/show_bug.cgi?id=14249
RFC 3530 states that when we receive the error NFS4ERR_RESOURCE, we are not
supposed to bump the sequence number on OPEN, LOCK, LOCKU, CLOSE, etc
operations. The problem is that we map that error into EREMOTEIO in the XDR
layer, and so the NFSv4 middle-layer routines like seqid_mutating_err(),
and nfs_increment_seqid() don't recognise it.
The fix is to defer the mapping until after the middle layers have
processed the error.
Actually pass the NFS_FILE_SYNC option to the server to avoid a
panic in nfs_direct_write_complete() when a commit fails.
At the end of an nfs write, if the nfs commit fails, all the writes
will be rescheduled. They are supposed to be rescheduled as NFS_FILE_SYNC
writes, but the rpc_task structure is not completely initialized and so
the option is not passed. When the rescheduled writes complete, the
return indicates that they are NFS_UNSTABLE and we try to do another
commit. This leads to a panic because the commit data structure pointer
was set to null in the initial (failed) commit attempt.
As seen in <http://bugs.debian.org/549002>, nfs4_init_client() can
overrun the source string when copying the client IP address from
nfs_parsed_mount_data::client_address to nfs_client::cl_ipaddr. Since
these are both treated as null-terminated strings elsewhere, the copy
should be done with strlcpy() not memcpy().
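The fix is essentially a one-line substitution (a sketch):

    /* cl_ipaddr is a fixed-size, NUL-terminated buffer; strlcpy() bounds
     * the copy and guarantees termination, unlike the old memcpy(). */
    strlcpy(clp->cl_ipaddr, ip_addr, sizeof(clp->cl_ipaddr));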
Commit 9ef1d4c7c7aca1cd436612b6ca785b726ffb8ed8 ("[NETLINK]: Missing
initializations in dumped data") introduced a typo in
initialization. This patch fixes this.
Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In MII monitor mode, bond_check_dev_link() calls the ioctl
handler of slave devices. It stores the ndo_do_ioctl function
pointer in a static (!) local variable and later uses it to call the
handler with the IOCTL macro.
If another thread executes bond_check_dev_link() at the same time
(even with a different bond, which none of the locks prevent), a
race condition occurs. If the two racing slaves have different
drivers, this may result in one driver's ioctl handler being
called with a pointer to a net_device controlled with a different
driver, resulting in unpredictable breakage.
Unless I am overlooking something, the "static" must be a
copy'n'paste error (?).
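The fix boils down to dropping the storage-class specifier so the pointer lives on the stack of each caller (a sketch of the relevant lines; the surrounding bonding code is omitted):

    /* Was: static int (*ioctl)(struct net_device *, struct ifreq *, int);
     * A static pointer is shared by all concurrent callers, so two slaves
     * with different drivers can race and invoke the wrong handler. */
    int (*ioctl)(struct net_device *, struct ifreq *, int);

    ioctl = slave_dev->netdev_ops->ndo_do_ioctl;
    if (ioctl) {
            /* ... build the ifreq and call ioctl(slave_dev, &ifr, SIOCGMIIPHY) ... */
    }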
Signed-off-by: Jiri Bohac <jbohac@suse.cz> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If two streams are started immediately after one another (such as a
playback and a recording stream), the call to set hw params fails with
EBUSY. This patch makes the call succeed, so playback and recording will
work properly.
I found a deadlock bug in UNIX domain sockets which allows non-root
users to mount a DoS attack against the local machine.
How to reproduce:
1. Make a listening AF_UNIX/SOCK_STREAM socket with an abstract
namespace (*), and shutdown(2) it.
2. Repeatedly connect(2) to the listening socket from other sockets
until the connection backlog is filled up.
3. connect(2) takes the CPU forever. If every core is taken, the
system hangs.
PoC code: (Run as many times as cores on SMP machines.)
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
            int ret;
            int csd;
            int lsd;                /* unused here; the listening socket is created in step 1 */
            struct sockaddr_un sun;

            /* make an abstract name address (*) */
            memset(&sun, 0, sizeof(sun));
            sun.sun_family = PF_UNIX;
            sprintf(&sun.sun_path[1], "%d", getpid());

            /* connect loop */
            alarm(15);              /* forcibly exit the loop after 15 sec */
            for (;;) {
                    csd = socket(AF_UNIX, SOCK_STREAM, 0);
                    ret = connect(csd, (struct sockaddr *)&sun, sizeof(sun));
                    if (-1 == ret) {
                            perror("connect()");
                            break;
                    }
                    puts("Connection OK");
            }
            return 0;
    }
(*) Make sun_path[0] = 0 to use the abstract namespace.
If a file-based socket is used, the system doesn't deadlock because
of context switches in the file system layer.
Why this happens:
Error checks between unix_socket_connect() and unix_wait_for_peer() are
inconsistent. The former calls the latter to wait until the backlog is
processed. Although the latter returns without doing anything when the
socket is shut down, the former doesn't check the shutdown state and
just retries calling the latter forever.
Patch:
The patch below adds a shutdown check to unix_socket_connect(), so
connect(2) to a shut-down socket will return -ECONNREFUSED.
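A sketch of the kind of check described (variable and field names follow af_unix.c conventions; the exact placement in the connect path is an assumption):

    err = -ECONNREFUSED;
    if (other->sk_state != TCP_LISTEN)
            goto out_unlock;
    if (other->sk_shutdown & RCV_SHUTDOWN)
            goto out_unlock;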
Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama.qu@hitachi.com> Signed-off-by: Masanori Yoshida <masanori.yoshida.tv@hitachi.com> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
"b43: Fix PPC crash in rfkill polling on unload" fixed the bug reported
in Bugzilla No. 14181; however, it introduced a new bug. Whenever the
radio switch was turned off, it was necessary to unload and reload
the driver for it to recognize the switch again.
This patch fixes both the original bug in #14181 and the bug introduced by
the previous patch. It must be stated, however, that if there is a BCM4306/3
with an rfkill switch (not yet proven), then the driver will need an
unload/reload cycle to turn the device back on.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The destination keyring specified to request_key() and co. is made available to
the process that instantiates the key (the slave process started by
/sbin/request-key typically). This is passed in the request_key_auth struct as
the dest_keyring member.
keyctl_instantiate_key() and keyctl_negate_key() call get_instantiation_keyring()
to get the keyring to attach the newly constructed key to at the end of
instantiation. This may be given a specific keyring into which a link will be
made later, or it may be asked to find the keyring passed to request_key(). In
the former case, it returns a keyring with the refcount incremented by
lookup_user_key(); in the latter case, it returns the keyring from the
request_key_auth struct - and does _not_ increment the refcount.
The latter case will eventually result in an oops when the keyring prematurely
runs out of references and gets destroyed. The effect may take some time to
show up as the key is destroyed lazily.
To fix this, the keyring returned by get_instantiation_keyring() must always
have its refcount incremented, no matter where it comes from.
This can be tested by setting up /etc/request-key.conf appropriately and then running:
    keyctl add user _display aaaaaaaa @u
    while keyctl request2 user test:x test:x @u &&
          keyctl list @u;
    do
            keyctl request2 user test:x test:x @u;
            sleep 31;
            keyctl list @u;
    done
which will oops eventually. Changing the negate line to have @u rather than
%S at the end is important as that forces the latter case by passing a special
keyring ID rather than an actual keyring ID.
Reported-by: Alexander Zangerl <az@bond.edu.au> Signed-off-by: David Howells <dhowells@redhat.com> Tested-by: Alexander Zangerl <az@bond.edu.au> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
acpi_get_pci_dev() may be called for a non-PCI device, in which case
it should return NULL. However, it assumes that every handle it
finds in the ACPI CA name space, between given device handle and the
PCI root bridge handle, corresponds to a PCI-to-PCI bridge with an
existing secondary bus. For this reason, when it finds a struct
pci_dev object corresponding to one of them, it doesn't check if
its 'subordinate' field is a valid pointer. This obviously leads to
a NULL pointer dereference if acpi_get_pci_dev() is called for a
non-PCI device with a PCI parent which is not a bridge.
To fix this issue make acpi_get_pci_dev() check if pdev->subordinate
is not NULL for every device it finds on the path between the root
bridge and the device it's supposed to get to and return NULL if the
"target" device cannot be found.
http://bugzilla.kernel.org/show_bug.cgi?id=14129
(worked in 2.6.30, regression in 2.6.31)
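Schematically, the walk from the root bridge down to the target now bails out like this (a sketch; the real function iterates over the ACPI handle list and manages the pci_dev references):

    pbus = pdev->subordinate;
    if (!pbus) {
            /* The parent is a plain PCI device (e.g. a SATA controller),
             * not a PCI-to-PCI bridge, so the handle we were given cannot
             * lead to a PCI device: give up instead of dereferencing NULL. */
            pdev = NULL;
            break;
    }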
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Reported-by: Danny Feng <dfeng@redhat.com> Reviewed-by: Alex Chiang <achiang@hp.com> Tested-by: chepioq <chepioq@gmail.com> Signed-off-by: Len Brown <len.brown@intel.com> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Cc: Tim Abbott <tabbott@ksplice.com> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Commit 51b563fc93c8cb5bff1d67a0a71c374e4a4ea049 ("arm, cris, mips,
sparc, powerpc, um, xtensa: fix build with bash 4.0") removed a few
CPPFLAGS with vital include paths necessary to build vmlinux.lds
on MIPS, and moved the calculation of the 'jiffies' symbol
directly to vmlinux.lds.S but forgot to change make ifdef/... to
cpp macros.
Signed-off-by: Manuel Lauss <manuel.lauss@gmail.com>
[sam: moved assignment of CPPFLAGS arch/mips/kernel/Makefile] Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Acked-by: Dmitri Vorobiev <dmitri.vorobiev@movial.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
fsnotify_add_mark is supposed to add a mark to the g_list and i_list and to
set the group and inode for the mark. fsnotify_destroy_mark_by_entry uses
the fact that ->group != NULL to know if this group should be destroyed or
if it's already been done.
But fsnotify_add_mark sets the group and inode before it actually adds the
mark to the i_list and g_list. This can result in a race in inotify; it
requires three threads.
sys_inotify_add_watch("file") sys_inotify_add_watch("file") sys_inotify_rm_watch([a])
inotify_update_watch()
inotify_new_watch()
inotify_add_to_idr()
^--- returns wd = [a]
inotfiy_update_watch()
inotify_new_watch()
inotify_add_to_idr()
fsnotify_add_mark()
^--- returns wd = [b]
returns to userspace;
inotify_idr_find([a])
^--- gives us the pointer from task 1
fsnotify_add_mark()
^--- this is going to set the mark->group and mark->inode fields, but will
return -EEXIST because of the race with [b].
fsnotify_destroy_mark()
^--- since ->group != NULL we call back
into inotify_freeing_mark() which calls
inotify_remove_from_idr([a])
since fsnotify_add_mark() failed we call:
inotify_remove_from_idr([a]) <------WHOOPS it's not in the idr, this could
have been any entry added later!
The fix is to make sure we don't set mark->group until we are sure the mark is
on the inode and fsnotify_add_mark will return success.
Signed-off-by: Eric Paris <eparis@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
There is an erratum for IOMMU hardware which documents
undefined behavior when forwarding SMI requests from
peripherals and the DTE of that peripheral has a sysmgt
value of 01b. This problem caused weird IO_PAGE_FAULTS in my
case.
This patch implements the suggested workaround for that
erratum into the AMD IOMMU driver. The erratum is
documented with number 63.
fuse_direct_io() has a loop where requests are allocated in each
iteration. If the allocation fails, the loop is broken out of and
execution falls through to an unconditional fuse_put_request() on that
invalid pointer.
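The shape of the problem and the fix, sketched (heavily simplified from fuse_direct_io()):

    while (count) {
            /* ... issue I/O using req ... */
            if (count) {
                    fuse_put_request(fc, req);
                    req = fuse_get_req(fc);
                    if (IS_ERR(req))
                            break;          /* allocation failed mid-loop */
            }
    }
    if (!IS_ERR(req))                       /* the fix: never put an ERR_PTR */
            fuse_put_request(fc, req);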
Signed-off-by: Anand V. Avati <avati@gluster.com> Signed-off-by: Miklos Szeredi <mszeredi@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If the TSS we are switching to resides in high memory, the task switch will
fail since the address will be truncated. Windows 2003 does this sometimes
when running with more than 4GB.
Not a single line of actual code in the function was really
fundamentally correct.
Problems ranged from lack of proper range checking, to removing the last
character written (which admittedly is usually '\n'), to not accepting
hex numbers even though the 'show' routine would show the data in that
format.
This tries to do better.
Acked-by: Michael Buesch <mb@bu3sch.de> Tested-and-acked-by: Jack Steiner <steiner@sgi.com> Cc: Jiri Kosina <jkosina@suse.cz> Cc: Michael Gilbert <michael.s.gilbert@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
We never want to rely on the hvc workqueue to emit output, because the
most interesting output is when the kernel is broken. This will
improve oops/crash/console message for better debugging.
Instead, we force-poll until all output is emitted.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
For new steppings of the PCH, the display reference clock
is fully under the driver's control. This one tries to set up
all needed reference clocks for the different outputs. Older
steppings of the PCH chipset should ignore this.
This fixes an output failure issue on newer PCH, which requires the
driver to take control of reference clock enabling.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com> Signed-off-by: Eric Anholt <eric@anholt.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The FDI M/N calculation didn't take the current pipe color depth into account,
but always assumed 24bpp. This one checks the current pipe color depth setting
and changes the FDI M/N calculation a little to use bits_per_pixel first, then
convert to bytes_per_pixel later.
This fixes a display corruption issue on Arrandale LVDS with a 1600x900 panel
in 18bpp dual-channel mode.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com> Signed-off-by: Eric Anholt <eric@anholt.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Bruno Prémont and Bill Dunphy notified me that NILFS reliably
hangs on ARM-based targets.
I found this was caused by an underflow of the dirty pages counter: a
b-tree cache routine was marking pages dirty without adjusting the page
accounting information.
This fixes the dirty page accounting leak and resolves the hang on
ARM-based targets.
Reported-by: Bruno Prémont <bonbons@linux-vserver.org> Reported-by: Dunphy, Bill <WDunphy@tandbergdata.com> Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp> Tested-by: Bruno Prémont <bonbons@linux-vserver.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Restoring %ebp after the call to audit_syscall_exit() is not
only unnecessary (because the register didn't get clobbered),
but in the sysenter case wasn't even doing the right thing: It
loaded %ebp from a location below the top of stack (RBP <
ARGOFFSET), i.e. arbitrary kernel data got passed back to user
mode in the register.
Signed-off-by: Jan Beulich <jbeulich@novell.com> Acked-by: Roland McGrath <roland@redhat.com>
LKML-Reference: <4AE5CC4D020000780001BD13@vpn.id2.novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In try_to_unuse(), swcount is a local copy of *swap_map, including the
SWAP_HAS_CACHE bit; but a wrong comparison against swap_count(*swap_map),
which masks off the SWAP_HAS_CACHE bit, succeeded where it should fail.
That had the effect of resetting the mm from which to start searching
for the next swap page, to an irrelevant mm instead of to an mm in which
this swap page had been found: which may increase search time by ~20%.
But we're used to swapoff being slow, so never noticed the slowdown.
Remove that one spurious use of swap_count(): Bo Liu thought it merely
redundant, Hugh rewrote the description since it was measurably wrong.
Signed-off-by: Bo Liu <bo-liu@hotmail.com> Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Don't pass NULL pointers to fput() in the error handling paths of the NOMMU
do_mmap_pgoff() as it can't handle it.
The following can be used as a test program:
int main() { static long long a[1024 * 1024 * 20] = { 0 }; return a;}
Without the patch, the code oopses in atomic_long_dec_and_test() as called by
fput() after the kernel complains that it can't allocate that big a chunk of
memory. With the patch, the kernel just complains about the allocation size
and then the program segfaults during execve() as execve() can't complete the
allocation of all the new ELF program segments.
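The fix itself is a simple guard in the error paths (sketch):

    /* Only drop the reference if one was actually taken; fput(NULL)
     * dereferences the pointer and oopses. */
    if (file)
            fput(file);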
Reported-by: Robin Getz <rgetz@blackfin.uclinux.org> Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Robin Getz <rgetz@blackfin.uclinux.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
A few parts of the uv_hub_info structure are initialized
incorrectly.
- n_val is being loaded with m_val.
- gpa_mask is initialized with a byte count instead of an unsigned long.
- Handle the case where none of the alias registers are used.
Lastly I converted the bau over to using the uv_hub_info->m_val
which is the correct value.
Without this patch, booting a large configuration hits a
problem where the upper bits of the gnode affect the pnode
and the bau will not operate.
For some strange reason the netif_running() check
ended up after the actual type change instead of
before, potentially causing all kinds of problems
if the interface is up while changing the type;
one of the problems manifests itself as a warning:
WARNING: at net/mac80211/iface.c:651 ieee80211_teardown_sdata+0xda/0x1a0 [mac80211]()
Hardware name: Aspire one
Pid: 2596, comm: wpa_supplicant Tainted: G W 2.6.31-10-generic #32-Ubuntu
Call Trace:
[] warn_slowpath_common+0x6d/0xa0
[] warn_slowpath_null+0x15/0x20
[] ieee80211_teardown_sdata+0xda/0x1a0 [mac80211]
[] ieee80211_if_change_type+0x4a/0xc0 [mac80211]
[] ieee80211_change_iface+0x61/0xa0 [mac80211]
[] cfg80211_wext_siwmode+0xc7/0x120 [cfg80211]
[] ioctl_standard_call+0x58/0xf0
Cc: Arjan van de Ven <arjan@infradead.org> Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When hostapd injects a frame, e.g. an authentication or association
response, mac80211 looks for a suitable access point virtual interface
to associate the frame with based on its source address. This makes it
possible e.g. to correctly assign sequence numbers to the frames.
A small typo in the ethernet address comparison statement caused a
failure to find a suitable ap interface. Sequence numbers on such
frames were therefore left unassigned, causing some clients
(especially windows-based 11b/g clients) to reject them and fail to
authenticate or associate with the access point. This patch fixes the
typo in the address comparison statement.
Signed-off-by: Björn Smedman <bjorn.smedman@venatech.se> Reviewed-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The reason for this bug is that the error handling code of
cifs_get_tcp_session()
calls kfree() even when the corresponding kmalloc() failed.
(The kmalloc() is called by extract_hostname().)
Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp> Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
On SMP guests, reads from the ring might bypass used index reads. This
causes guest crashes because the host writes to the used index to signal
ring data readiness. Fix this by inserting an rmb() before used ring reads.
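A sketch of where the barrier sits in the guest's used-ring consumption path (field names follow the vring layout; the exact function is illustrative):

    if (!more_used(vq))                     /* reads vring.used->idx */
            return NULL;

    /* Make sure the used index read above is ordered before we read the
     * used ring entry itself; otherwise the entry read can be satisfied
     * with stale data on SMP. */
    rmb();

    i = vq->vring.used->ring[vq->last_used_idx % vq->vring.num].id;
    len = vq->vring.used->ring[vq->last_used_idx % vq->vring.num].len;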
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In the case where cpuidle_idle_call() returns before changing state due to
a need_resched(), it was returning with IRQs disabled.
The idle path assumes that the platform specific idle code returns with
interrupts enabled (although this too is undocumented AFAICT) and on ARM
we have a WARN_ON(!irqs_disabled()) when returning from the idle loop, so
the user-visible effects were only a warning since interrupts were
eventually re-enabled later.
On x86, this same problem exists, but there is no WARN_ON() to detect it.
As on ARM, the interrupts are eventually re-enabled, so I'm not sure of
any actual bugs triggered by this. It's primarily a
correctness/consistency fix.
This patch ensures IRQs are (re)enabled before returning.
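In cpuidle_idle_call() terms, the fix is the early-return path re-enabling interrupts itself (a sketch):

    /* The idle loop expects us to return with IRQs on, even when we bail
     * out early without entering a C-state. */
    if (need_resched()) {
            local_irq_enable();
            return;
    }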
Reported-by: Hemanth V <hemanthv@ti.com> Signed-off-by: Kevin Hilman <khilman@deeprootsystems.com> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Len Brown <len.brown@intel.com> Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: "Rafael J. Wysocki" <rjw@sisk.pl> Tested-by: Martin Michlmayr <tbm@cyrius.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
On a 64-bit kernel, skb->tail is an offset, not a pointer. The libertas
usb driver passes it to usb_fill_bulk_urb() anyway, causing interesting
crashes. Fix that by using skb->data instead.
This highlights a problem with usb_fill_bulk_urb(). It doesn't notice
when dma_map_single() fails and return the error to its caller as it
should. In fact it _can't_ currently return the error, since it returns
void.
So this problem was showing up only at unmap time, after we'd already
suffered memory corruption by doing DMA to a bogus address.
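Concretely, the driver-side fix is small (a sketch of the URB setup; cardp, ep_in, the buffer length and the completion callback are placeholders for the driver's actual names):

    usb_fill_bulk_urb(cardp->rx_urb, cardp->udev,
                      usb_rcvbulkpipe(cardp->udev, cardp->ep_in),
                      skb->data,        /* was skb->tail: an offset on 64-bit */
                      rx_buf_size,
                      rx_complete_cb, cardp);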
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch presents a fix for the autosuspend feature implementation in
the sierra USB serial driver for the function sierra_send_setup(). Because
it is possible to call sierra_send_setup() before sierra_open() or after
sierra_close(), we added get/put interface calls to ensure that the
USB control transfer can happen even when the device is autosuspended.
Signed-off-by: Elina Pasheva <epasheva@sierrawireless.com> Tested-by: Matthew Safar <msafar@sierrawireless.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
We create a dummy struct kernel_param on the stack for parsing each
array element, but we didn't initialize the flags word. This matters
for arrays of type "bool", where the flag indicates if it really is
an array of bools or unsigned int (old-style).
e180a6b7759a "param: fix charp parameters set via sysfs" fixed the case
where charp parameters written via sysfs were freed, leaving drivers
accessing random memory.
Unfortunately, storing a flag in the kparam struct was a bad idea: it's
rodata so setting it causes an oops on some archs. But that's not all:
1) module_param_array() on charp doesn't work reliably, since we use an
uninitialized temporary struct kernel_param.
2) there's a fundamental race if a module uses this parameter and then
it's changed: they will still access the old, freed, memory.
The simplest fix (ie. for 2.6.32) is to never free the memory. This
prevents all these problems, at cost of a memory leak. In practice, there
are only 18 places where a charp is writable via sysfs, and all are
root-only writable.
In this patch:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=16dc42e018c2868211b4928f20a957c0c216126c
a check was added for the case where another driver has already claimed the
same device on the same bus. But the returned error code was wrong: to
modprobe, -EEXIST means that _this_ driver is already installed, and it
therefore doesn't produce the needed error message when _another_ driver is
trying to register for the same device. Returning -EBUSY fixes the problem.
I am not confident that I can find and fix all cases where a sector number
may be truncated. For now, avoid data loss by refusing to mount HFS+
volumes with more than 2^32 sectors (2TB).
[akpm@linux-foundation.org: fix 32 and 64-bit issues] Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Cc: Eric Sesterhenn <snakebyte@gmx.de> Cc: Roman Zippel <zippel@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The maximum chunk size is 512kB; there will never be a need to use a 4GB
chunk size, so the number can be 32-bit. This fixes a compiler failure on
32-bit systems with large block devices.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Reviewed-by: Jonathan Brassow <jbrassow@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If we are creating a snapshot with a memory-stored exception store, fail if
the user didn't specify a chunk size. A zero chunk size would probably crash
a lot of places in the rest of the snapshot code.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Reviewed-by: Jonathan Brassow <jbrassow@redhat.com> Reviewed-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch locks the snapshot when returning status. It fixes a race
where it could return an invalid number of free chunks if someone
was simultaneously modifying the snapshot.
Multiple instances of dec_pending() can run concurrently so a lock is
needed when it saves the first error code.
I have never experienced an actual problem without the locking and just found
this during code inspection while implementing the barrier support
patch for request-based dm.
This patch adds the locking.
I've done compile, boot and basic I/O testing.
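A sketch of the locked first-error save in dec_pending() (the lock member name is an assumption here):

    /* Push-back supersedes any I/O errors; take the lock so that two
     * completions finishing at the same time can't race on io->error. */
    if (unlikely(error)) {
            spin_lock_irqsave(&io->endio_lock, flags);
            if (!(io->error > 0 && __noflush_suspending(io->md)))
                    io->error = error;
            spin_unlock_irqrestore(&io->endio_lock, flags);
    }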
drivers/md/dm-log-userspace-base.c: In function `userspace_ctr':
drivers/md/dm-log-userspace-base.c:159: warning: cast from pointer to integer of different size
Cc: Jonathan Brassow <jbrassow@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Avoid a race causing corruption when snapshots of the same origin have
different chunk sizes by sorting the internal list of snapshots by chunk
size, largest first.
https://bugzilla.redhat.com/show_bug.cgi?id=182659
For example, let's have two snapshots with different chunk sizes. The
first snapshot (1) has small chunk size and the second snapshot (2) has
large chunk size. Let's have chunks A, B, C in these snapshots:
snapshot1: ====A==== ====B====
snapshot2: ==========C==========
(Chunk size is a power of 2. Chunks are aligned.)
A write to the origin at a position within A and C comes along. It
triggers reallocation of A, then reallocation of C and links them
together using A as the 'primary' exception.
Then another write to the origin comes along at a position within B and
C. It creates pending exception for B. C already has a reallocation in
progress and it already has a primary exception (A), so nothing is done
to it: B and C are not linked.
If the reallocation of B finishes before the reallocation of C, then because
there is no link with the pending exception for C, it does not know to wait
for it; the second write is dispatched to the origin and causes data
corruption in chunk C in snapshot2.
To avoid this situation, we maintain snapshots sorted in descending
order of chunk size. This leads to a guaranteed ordering on the links
between the pending exceptions and avoids the problem explained above -
both A and B now get linked to C.
Apparently some Toshiba Protege M300 machines identify themselves as
"Portable PC" in DMI, so we need to add that to the DMI table as
well. We need the DMI data so we can automatically lower the Synaptics
reporting rate from 80 to 40 pps to avoid over-taxing their
keyboard controllers.
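The change amounts to one more entry in the existing Toshiba DMI match table (a sketch; the table name in the Synaptics driver is not shown here):

    {
            .ident = "Toshiba Portable PC",
            .matches = {
                    DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
                    DMI_MATCH(DMI_PRODUCT_NAME, "Portable PC"),
            },
    },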
After successfully registering the misc device, the driver iounmaps the
hardware registers and kfree()s the device data structure. Ouch!
This was introduced with commit e42311d75 (riowatchdog: Convert to
pure OF driver) and went unnoticed for more than a year :)
Return success instead of dropping into the error cleanup code path.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Many years ago when this driver was written, it had a use, but these
days it's nothing but trouble and distributions should not enable it
in any situation.
Pretty much every console device a sparc machine could see has a
bonafide real driver, making the PROM console hack unnecessary.
If any new device shows up, we should write a driver instead of
depending upon this crutch to save us. We've been able to take care
of this even when no chip documentation exists (sunxvr500, sunxvr2500)
so there are no excuses.
Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Disables 64-bit DMA for _all_ boards, unlike 2.6.32 which adds a
whitelist. (The whitelist function requires a fairly large patch
that touches unrelated code.)
Doesn't revert the DMI part as other backported patches might need
the exported symbol.
The requeue_pi path doesn't use unqueue_me() (and the racy lock_ptr ==
NULL test) nor does it use the wake_list of futex_wake(), which were
the reason for commit 41890f2 (futex: Handle spurious wake up).
See debugging discussing on LKML Message-ID: <4AD4080C.20703@us.ibm.com>
The changes in this fix to the wait_requeue_pi path were considered to
be likely unnecessary, but a harmless safety net. But it turns out that,
due to the fact that for unknown $@#!*( reasons EWOULDBLOCK is defined
as EAGAIN, we built an endless loop in the code path which correctly
returns EWOULDBLOCK.
Spurious wakeups in wait_requeue_pi code path are unlikely so we do
the easy solution and return EWOULDBLOCK^WEAGAIN to user space and let
it deal with the spurious wakeup.
Cc: Darren Hart <dvhltc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: John Stultz <johnstul@linux.vnet.ibm.com> Cc: Dinakar Guniguntala <dino@in.ibm.com>
LKML-Reference: <4AE23C74.1090502@us.ibm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If userspace tries to perform a requeue_pi on a non-requeue_pi waiter,
it will find the futex_q->requeue_pi_key to be NULL and OOPS.
Check for NULL in match_futex() instead of doing explicit NULL pointer
checks on all call sites. While match_futex(NULL, NULL) returning
false is a little odd, it's still correct as we expect valid key
references.
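The resulting helper looks roughly like this (mirroring the description above; the field names are those of union futex_key):

    static inline int match_futex(union futex_key *key1, union futex_key *key2)
    {
            return (key1 && key2
                    && key1->both.word == key2->both.word
                    && key1->both.ptr == key2->both.ptr
                    && key1->both.offset == key2->both.offset);
    }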
Signed-off-by: Darren Hart <dvhltc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@elte.hu> CC: Eric Dumazet <eric.dumazet@gmail.com> CC: Dinakar Guniguntala <dino@in.ibm.com> CC: John Stultz <johnstul@us.ibm.com>
LKML-Reference: <4AD60687.10306@us.ibm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The futex code does not handle spurious wake up in futex_wait and
futex_wait_requeue_pi.
The code assumes that any wake up which was not caused by futex_wake /
requeue or by a timeout was caused by a signal wake up and returns one
of the syscall restart error codes.
In case of a spurious wake up the signal delivery code which deals
with the restart error codes is not invoked and we return that error
code to user space. That causes applications which actually check the
return codes to fail. Blaise reported that on preempt-rt a python test
program ran into an exception trap. -rt exposed that due to a built-in
spurious wake up accelerator :)
Solve this by checking signal_pending(current) in the wake up path and
handle the spurious wake up case w/o returning to user space.
Reported-by: Blaise Gassend <blaise@willowgarage.com> Debugged-by: Darren Hart <dvhltc@us.ibm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If the Linux kernel detects a C1E-capable AMD processor (K8 RevF and
higher), it will access a certain MSR on every attempt to go to halt.
Explicitly handle this read and return 0 to let KVM run a Linux guest
with the native AMD host CPU propagated to the guest.
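A sketch of the MSR-read special case (the constant name here is taken from msr-index.h as I recall it, so treat it as an assumption):

    case MSR_K8_INT_PENDING_MSG:
            /* C1E-aware guests read this MSR before halting; report 0
             * rather than injecting #GP for an unhandled MSR. */
            data = 0;
            break;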
Signed-off-by: Andre Przywara <andre.przywara@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
prereset doesn't bring the link online if a hardreset is about to happen,
and nv_hardreset() may skip if conditions are not right, so softreset may
be entered with a non-working link if the system firmware didn't
bring it up before entering OS code, which can happen during resume.
This patch makes nv_hardreset() bring up the link if it is skipping the
reset.
This bug was reported by frodone@gmail.com in the following bug entry.
Commit 842faa6c1a1d6faddf3377948e5cf214812c6c90 fixed error handling
during attach by not committing detected device class to dev->class
while attaching a new device. However, this change missed the PMP
class check in the configuration loop causing a new PMP device to go
through ata_dev_configure() as if it were an ATA or ATAPI device.
As a PMP device doesn't have regular IDENTIFY data, this makes
ata_dev_configure() try to configure a PMP device using invalid
data. For the most part it wasn't too harmful and went unnoticed, but
this ends up clearing dev->flags, which may have ATA_DFLAG_AN set by
sata_pmp_attach(). This means that SATA_PMP_FEAT_NOTIFY ends up being
disabled on PMPs, and on PMPs which honor the flag this breaks hotplug
support.
This problem was discovered and reported by Ethan Hsiao.
When an internal command fails, it should be failed directly without
invoking EH. In the original implementation, this was accomplished by
letting internal commands bypass failure handling in ata_qc_complete().
However, later changes added post-successful-completion handling to
that code path and the success path is no longer adequate as internal
command failure path. One of the visible problems is that internal
command failure due to timeout or other freeze conditions would
spuriously trigger WARN_ON_ONCE() in the success path.
This patch updates failure path such that internal command failure
handling is contained there.
On some systems, when ACPI is enabled, ACPI clears some BARs for some
devices without reason, and the kernel then needs to allocate resources
for them. It then apparently hits some undocumented resource conflict,
resulting in non-working devices.
Try to increase the alignment to get a safer range for unassigned devices.
Signed-off-by: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Note that the failure window is quite small and I could only
reliably reproduce the defect by inserting a small delay
in pipe_rdwr_open().
Although the defect was observed in pipe_rdwr_open(), I think it
makes sense to replicate the change through all the pipe_*_open()
functions.
The core of the change is to verify that inode->i_pipe has not
been released before attempting to manipulate it. If inode->i_pipe
is no longer present, return ENOENT to indicate so.
The comment about potentially using atomic_t for i_pipe->readers
and i_pipe->writers has also been removed because it is no longer
relevant in this context. The inode->i_mutex lock must be used so
that inode->i_pipe can be dealt with correctly.
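The resulting pattern, shown for the read/write case (the other pipe_*_open() variants follow the same shape):

    static int pipe_rdwr_open(struct inode *inode, struct file *filp)
    {
            int ret = -ENOENT;

            mutex_lock(&inode->i_mutex);
            if (inode->i_pipe) {
                    inode->i_pipe->readers++;
                    inode->i_pipe->writers++;
                    ret = 0;
            }
            mutex_unlock(&inode->i_mutex);

            return ret;
    }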
The locking logic in this function is extremely subtle, and it broke
when we started doing potentially concurrent 'flush_to_ldisc()' calls in
commit e043e42bdb66885b3ac10d27a01ccb9972e2b0a3 ("pty: avoid forcing
'low_latency' tty flag").
The code in flush_to_ldisc() used to set 'tty->buf.head' to NULL, with
the intention that this would then cause any other concurrent calls to
not do anything (locking note: we have to drop the buf.lock over the
call to ->receive_buf that can block, which is why we can have
concurrency here at all in the first place).
It also used to set the TTY_FLUSHING bit, which would then cause any
concurrent 'tty_buffer_flush()' to not free all the tty buffers and
clear 'tty->buf.tail'. And with 'buf.head' being NULL, and 'buf.tail'
being non-NULL, new data would never touch 'buf.head'.
Does that sound a bit too subtle? It was. If another concurrent call to
'flush_to_ldisc()' were to come in, the NULL buf.head would indeed cause
it to not process the buffer list, but it would still clear TTY_FLUSHING
afterwards, making the buffer protection against 'tty_buffer_flush()' no
longer work.
So this clears it all up. We depend purely on TTY_FLUSHING for handling
re-entrancy, and stop playing games with the buffer list entirely. In
fact, the buffer list handling is now robust enough that we could
probably stop doing the whole "protect against 'tty_buffer_flush()'"
thing entirely.
However, Alan also points out that we would probably be better off
simplifying the locking even further, and just take the tty ldisc_mutex
around all the buffer flushing calls. That seems like a good idea, but
in the meantime this is a conceptually minimal fix (with the patch
itself being bigger than required just to clean the code up and make it
readable).
When receiving data frames, we can send them only to
the interface they belong to based on transmitting
station (this doesn't work for probe requests). Also,
don't try to handle other frames for AP_VLAN at all
since those interfaces should only receive data.
Additionally, the transmit side must check that the
station we're sending a frame to is actually on the
interface we're transmitting on, and not transmit
packets to functions that live on other interfaces,
so validate that as well.
Another bug fix is needed in sta_info.c where in the
VLAN case when adding/removing stations we overwrite
the sdata variable we still need.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>