Xiaotian Feng triggered a list corruption in the clock events list on
CPU hotplug and debugged the root cause.
If a CPU registers more than one per cpu clock event device, then only
the active clock event device is removed on CPU_DEAD. The unused
devices are kept in the clock events device list.
On CPU up the clock event devices are registered again, which means
that we list_add an already enqueued list_head. That results in list
corruption.
Resolve this by removing all devices which are associated with the dead
CPU on CPU_DEAD.
cpuid(0xd, ..); // find out what features FP/SSE/.. etc are supported
xsetbv(); // enable the features known to OS
cpuid(0xd, ..); // find out the size of the context for features enabled
Depending on what features get enabled in xsetbv(), the value of
cpuid.eax=0xd.ecx=0.ebx changes correspondingly (representing the
size of the context that is enabled).
As native_cpuid() does not use the volatile keyword, gcc 4.1.2
optimizes away the second cpuid, and the kernel continues to use
the cpuid information obtained before xsetbv(), ultimately leading to a kernel
crash on processors supporting more state than the legacy FP/SSE.
Add "volatile" to native_cpuid().
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1261009542.2745.55.camel@sbs-t61.sc.intel.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
A dio transfer always resets mdata->page_order to zero, which breaks
high-order pages previously allocated for non-dio transfers.
This patch adds reserved_page_order to the st_buffer structure to save
the page order for non-dio transfers.
http://bugzilla.kernel.org/show_bug.cgi?id=14563
When enlarge_buffer() grows the buffer from 0 to 524288 bytes, st uses an
order-6 page allocation, so mdata->page_order is 6 and frp_seg is 2.
After that, if st uses dio, sgl_map_user_pages() sets
mdata->page_order to 0 for st_do_scsi(). When we then call
normalize_buffer(), it frees only frp_seg * PAGE_SIZE (2 * 4096)
though it should free frp_seg * PAGE_SIZE << 6 (2 * 4096 << 6). So we
see buffer_size set to 516096 (524288 - 8192).
Reported-by: Joachim Breuer <linux-kernel@jmbreuer.net> Tested-by: Joachim Breuer <linux-kernel@jmbreuer.net> Acked-by: Kai Makisara <kai.makisara@kolumbus.fi> Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Signed-off-by: James Bottomley <James.Bottomley@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
After commits c82f63e411f1b58427c103bd95af2863b1c96dd1 (PCI: check saved
state before restore) and 4b77b0a2ba27d64f58f16d8d4d48d8319dda36ff (PCI:
Clear saved_state after the state has been restored) PCI drivers are
prevented from restoring the device standard configuration registers
twice in a row. These changes introduced a regression on ipr EEH
recovery.
The ipr device driver saves the PCI state only during device probe
and restores it in ipr_reset_restore_cfg_space() during IOA resets. This
behavior causes EEH recovery to fail after the second error is
detected, since the registers are not being restored.
One possible solution would be saving the registers after restoring
them. The problem with this approach is that if, while recovering from an
EEH error, pci_save_state() itself results in an EEH error, the adapter/slot
will be reset and we end up back in ipr_reset_restore_cfg_space() without
a valid saved state to restore, so pci_restore_state() will fail.
The following patch introduces a workaround for this problem, hacking
around the PCI API by setting pdev->state_saved = true before we do the
restore. It fixes the EEH regression and prevents us from hitting another
EEH error during EEH recovery.
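A minimal sketch of the workaround, assuming it lands in ipr_reset_restore_cfg_space() and that the PCI device is reached via ioa_cfg->pdev (an assumption):

	/* Force pci_restore_state() to run even though the saved state
	 * was already consumed by an earlier restore. */
	ioa_cfg->pdev->state_saved = true;
	pci_restore_state(ioa_cfg->pdev);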
[jejb: fix is a hack ... Jesse and Rafael will fix properly] Signed-off-by: Kleber Sacilotto de Souza <klebers@linux.vnet.ibm.com> Acked-by: Brian King <brking@linux.vnet.ibm.com> Cc: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: James Bottomley <James.Bottomley@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
It is quite legitimate for CPUs to be numbered sparsely, meaning
that it is possible for an online CPU to have a number which is
greater than the total count of possible CPUs.
Currently find_get_context() has a sanity check on the cpu
number where it checks it against num_possible_cpus(). This
test can fail for a legitimate cpu number if the
cpu_possible_mask is sparsely populated.
This fixes the problem by checking the CPU number against
nr_cpumask_bits instead, since that is the appropriate check to
ensure that the cpu number is safe to pass to cpu_isset()
subsequently.
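A sketch of the adjusted sanity check in find_get_context(); the error values and the exact surrounding code are assumptions:

	if (cpu < 0 || cpu >= nr_cpumask_bits)
		return ERR_PTR(-EINVAL);

	/* Only now is the number known to be safe for cpu_isset(). */
	if (!cpu_isset(cpu, cpu_online_map))
		return ERR_PTR(-ENODEV);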
Reported-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org> Tested-by: Michael Neuling <mikey@neuling.org> Acked-by: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20091215084032.GA18661@brick.ozlabs.ibm.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
We are seeing a bug when booting w/ iommu=pt with current upstream
(bisect blames 19943b0e30b05d42e494ae6fef78156ebc8c637e "intel-iommu:
Unify hardware and software passthrough support).
The issue is specific to this loop during identity map initialization
of each device:
domain_context_mapping_one(si_domain, ..., CONTEXT_TT_PASS_THROUGH)
...
/* Skip top levels of page tables for
* iommu which has less agaw than default.
*/
for (agaw = domain->agaw; agaw != iommu->agaw; agaw--) {
pgd = phys_to_virt(dma_pte_addr(pgd));
if (!dma_pte_present(pgd)) { <------ failing here
spin_unlock_irqrestore(&iommu->lock, flags);
return -ENOMEM;
}
This box has two IOMMUs in it. The catchall IOMMU has MGAW == 48 and
SAGAW == 4. The other IOMMU has MGAW == 39, SAGAW == 2.
The device that's failing the above pgd test is the only device connected
to the non-catchall iommu, which has a smaller address width than the
domain default. This test is not necessary since the context is in PT
mode and the ASR is ignored.
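A hedged sketch of the fix: skip the AGAW adjustment (and with it the page table walk quoted above) when the context is in pass-through mode, since the address space root is ignored there. The exact placement in domain_context_mapping_one() is an assumption:

	if (translation != CONTEXT_TT_PASS_THROUGH) {
		/* Skip top levels of page tables for iommu which has
		 * less agaw than default. */
		for (agaw = domain->agaw; agaw != iommu->agaw; agaw--) {
			pgd = phys_to_virt(dma_pte_addr(pgd));
			if (!dma_pte_present(pgd)) {
				spin_unlock_irqrestore(&iommu->lock, flags);
				return -ENOMEM;
			}
		}
	}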
Thanks to Don Dutile for discovering and debugging this one.
Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The hotplug notifier will call find_domain() to see if the device in
question has been assigned an IOMMU domain. However, this should never
be called for devices with a "dummy" domain, such as graphics devices
when intel_iommu=igfx_off is set and the corresponding IOMMU isn't even
initialised. If you do that, it'll oops as it dereferences the (-1)
pointer.
The notifier function should check iommu_no_mapping() for the
device before doing anything else.
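A minimal sketch of that check at the top of the notifier (the rest of the notifier body is unchanged and omitted here):

	struct device *dev = data;

	/* Devices with a dummy domain (e.g. graphics with
	 * intel_iommu=igfx_off) never got a real IOMMU domain;
	 * bail out before dereferencing anything. */
	if (iommu_no_mapping(dev))
		return 0;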
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Some HP BIOSes report an RMRR region (a region which needs a 1:1 mapping
in the IOMMU for a given device) which has an end address lower than its
start address. Detect that and warn, rather than triggering the
BUG() in dma_pte_clear_range().
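A hedged sketch of the kind of sanity check described above; the variable and field names follow the ACPI DMAR structures, and the exact placement and message are assumptions:

	if (rmrr->base_address > rmrr->end_address) {
		WARN_ONCE(1, "BIOS bug: broken RMRR (end before start), ignoring it\n");
		return -EINVAL;
	}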
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The BIOS errors where an IOMMU is reported either at zero or a bogus
address are causing problems even when the IOMMU is disabled -- because
interrupt remapping uses the same hardware. Ensure that the checks get
applied for the interrupt remapping initialisation too.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Ever since jffs2_garbage_collect_metadata() was first half-written in
February 2001, it's been broken on architectures where 'char' is signed.
When garbage collecting a symlink with target length above 127, the payload
length would end up negative, causing interesting and bad things to happen.
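A small illustration of the underlying C issue (not code from the driver): on architectures where plain 'char' is signed, a symlink target length above 127 stored in a char goes negative:

	char len = 200;		/* implementation-defined; becomes -56 where char is signed */
	int payload = len;	/* sign-extends to -56 instead of 200 */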
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Make sure that any otherwise uninitialised fields of usvc are zero.
This has been observed to cause a problem whereby the port of
fwmark services may end up as a non-zero value, which causes
scheduling of a destination server to fail for persistent services.
As observed by Deon van der Merwe <dvdm@truteq.co.za>.
This fix suggested by Julian Anastasov <ja@ssi.bg>.
For good measure also zero udest.
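A minimal sketch of the zeroing described above; the placement in the IPVS set_ctl handler is an assumption:

	memset(&usvc, 0, sizeof(usvc));
	memset(&udest, 0, sizeof(udest));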
Cc: Deon van der Merwe <dvdm@truteq.co.za> Acked-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When do_nonlinear_fault() realizes that the page table must have been
corrupted for it to have been called, it does print_bad_pte() and returns
... VM_FAULT_OOM, which is hard to understand.
It made some sense when I did it for 2.6.15, when do_page_fault() just
killed the current process; but nowadays it lets the OOM killer decide who
to kill - so page table corruption in one process would be liable to kill
another.
Change it to return VM_FAULT_SIGBUS instead: that doesn't guarantee that
the process will be killed, but is good enough for such a rare
abnormality, accompanied as it is by the "BUG: Bad page map" message.
And recent HWPOISON work has copied that code into do_swap_page(), when it
finds an impossible swap entry: fix that to VM_FAULT_SIGBUS too.
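A sketch of the do_nonlinear_fault() change in mm/memory.c (the do_swap_page() hunk is analogous):

	if (unlikely(!(vma->vm_flags & VM_NONLINEAR))) {
		/* Page table corruption: report it and kill cleanly. */
		print_bad_pte(vma, address, orig_pte, NULL);
		return VM_FAULT_SIGBUS;		/* was VM_FAULT_OOM */
	}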
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk> Cc: Izik Eidus <ieidus@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Nick Piggin <npiggin@suse.de> Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Rik van Riel <riel@redhat.com> Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com> Cc: Andi Kleen <andi@firstfloor.org> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Reviewed-by: Wu Fengguang <fengguang.wu@intel.com> Reviewed-by: Minchan Kim <minchan.kim@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In the disable sequence, all output ports on the PCH have to be disabled
before the PCH transcoder, but the LVDS port was left always enabled. This
fixes that by disabling the LVDS port properly during the pipe disable
process, which resolves a stability issue seen on Ironlake. Also move
the panel fitting disable to just after the pipe disable to align with
the spec.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com> Signed-off-by: Eric Anholt <eric@anholt.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In commit d2d9f2324, the guard for a valid video mode was removed. This
caused the regression:
kernel crash during kms graphic boot on Intel GM4500 platform
https://bugzilla.redhat.com/show_bug.cgi?id=540218
This patch changes the logic slightly to not rely on a coupled
variable, but to just check whether video_modes is valid before
dereferencing it.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Zhenyu Wang <zhenyu.z.wang@intel.com>
[ickle: Actually reference the correct bug report] Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com> Signed-off-by: Eric Anholt <eric@anholt.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
On platforms where bios handles the thermal monitor interrupt,
APIC_LVTTHMR on each logical CPU is programmed to generate a SMI and OS
can't touch it.
Unfortunately the AP bringup sequence using INIT-SIPI-SIPI clears all
the LVT entries except the mask bit. Essentially this results in
all LVT entries, including the thermal monitoring interrupt, being masked
(clearing the BIOS-programmed value for APIC_LVTTHMR).
This leads to the kernel taking over the thermal monitoring interrupt
on the APs but not on the BSP (leaving the BIOS-programmed value only on the BSP).
As a result, we have seen system hangs when the thermal
monitoring interrupt is generated.
Fix this by reading the initial value of the thermal LVT entry on the BSP
and, if the BIOS has taken over control, programming the same value
on all APs, leaving thermal monitoring interrupt control
on all the logical CPUs to the BIOS.
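A hedged sketch of the approach; the variable carrying the BSP value is an assumption, while the APIC accessors and flags are the usual <asm/apic.h> ones:

	/* On the BSP, early enough that the BIOS value is still intact: */
	lvtthmr_init = apic_read(APIC_LVTTHMR);

	/* Later, in the per-CPU thermal init, propagate and honour it: */
	apic_write(APIC_LVTTHMR, lvtthmr_init);
	if ((lvtthmr_init & APIC_DM_SMI) && !(lvtthmr_init & APIC_LVT_MASKED))
		return;		/* BIOS handles the thermal interrupt via SMI */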
Signed-off-by: Yong Wang <yong.y.wang@intel.com> Reviewed-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Borislav Petkov <borislav.petkov@amd.com> Cc: Arjan van de Ven <arjan@infradead.org>
LKML-Reference: <20091110013824.GA24940@ywang-moblin2.bj.intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch converts bcm63xx_enet to use get_sset_count
like the other drivers do.
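A sketch of what the converted hook typically looks like; apart from the ethtool API names, the identifiers are assumptions:

	static int bcm_enet_get_sset_count(struct net_device *netdev,
					   int string_set)
	{
		switch (string_set) {
		case ETH_SS_STATS:
			return BCM_ENET_STATS_LEN;
		default:
			return -EINVAL;
		}
	}

	/* wired up in ethtool_ops in place of the old .get_stats_count */
	.get_sset_count = bcm_enet_get_sset_count,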
Signed-off-by: Florian Fainelli <ffainelli@freebox.fr> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When ext3_write_begin fails after allocating some blocks or
generic_perform_write fails to copy data to write, we truncate blocks already
instantiated beyond i_size. Although these blocks were never inside i_size, we
have to truncate pagecache of these blocks so that corresponding buffers get
unmapped. Otherwise subsequent __block_prepare_write (called because we are
retrying the write) will find the buffers mapped, not call ->get_block, and
thus the page will be backed by already freed blocks leading to filesystem and
data corruption.
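A hedged sketch of the error-path cleanup described above; the exact helper and call sites in fs/ext3/inode.c are assumptions:

	if (pos + len > inode->i_size) {
		/* Drop page cache beyond i_size so the buffers get
		 * unmapped, then free blocks instantiated past i_size. */
		truncate_inode_pages(inode->i_mapping, inode->i_size);
		ext3_truncate(inode);
	}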
Reported-by: James Y Knight <foom@fuhm.net> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
I received some bug reports about userspace programs having problems
because after RTM_NEWLINK was received they could not immediately
access files under /proc/sys/net/ because those files had not been
registered yet.
The problem was trivially fixed by moving the userspace
notification from rtnetlink_event to the end of register_netdevice.
Signed-off-by: Eric W. Biederman <ebiederm@aristanetworks.com> Cc: David Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Currently, ARB_DISABLE is a NOP on all of the recent Intel platforms.
For such platforms, reduce contention on c3_lock by skipping the fake
ARB_DISABLE.
The cpu model id on one laptop is 14. If we disable ARB_DISABLE on this box,
the box can't be booted correctly, but if we still enable ARB_DISABLE on it,
the box boots correctly.
So we still use ARB_DISABLE for CPUs whose model id is less than 0x0f.
http://bugzilla.kernel.org/show_bug.cgi?id=14700
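A simplified sketch of the resulting check, restating only the model condition named above (the real code may also key off the CPU family; field names follow struct cpuinfo_x86 and struct acpi_processor_flags):

	/* ARB_DISABLE is a NOP on recent Intel CPUs; keep relying on it
	 * (i.e. keep bm_control) when the model id is below 0x0f. */
	if (c->x86_vendor == X86_VENDOR_INTEL && c->x86_model >= 0x0f)
		flags->bm_control = 0;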
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com> Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Tejun Heo [Wed, 9 Dec 2009 23:43:16 +0000 (08:43 +0900)]
vmalloc: conditionalize build of pcpu_get_vm_areas()
No matching upstream commit as it was resolved differently there.
pcpu_get_vm_areas() is used only when dynamic percpu allocator is used
by the architecture. In 2.6.32, ia64 doesn't use dynamic percpu
allocator and has a macro which makes pcpu_get_vm_areas() buggy via
local/global variable aliasing and triggers compile warning.
The problem is fixed upstream and ia64 now uses the dynamic percpu
allocator, so the only remaining issue is the inclusion of unnecessary code
and a compile warning on ia64 in 2.6.32.
Don't build pcpu_get_vm_areas() if legacy percpu allocator is in use.
The light sensor disables the brightness keys and
/sys/class/backlight/ control. There were a lot of reports
from users who didn't understand why they couldn't change their
brightness, including:
Now the light sensor is disabled, and if the user wants to enable
it, the level should be ok.
The funny thing is that the comments were ok, not the code.
Cc: stable@kernel.org Cc: Thomas Renninger <trenn@suse.de> Cc: Peter KĂĽppers <peter-mailbox@web.de> Cc: Michael Franzl <michaelfranzl@gmx.at> Cc: Ian Turner <vectro@vectro.org> Signed-off-by: Corentin Chary <corentincj@iksaif.net> Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Added new BIOS versions for the following netbooks: Acer 1410, Gateway LT31,
Packard Bell DOA150. As the Gateway LT31 machines have different register
values for setting and checking the off-state, the "cmd_off" variable has
been split into "cmd_off" and "chk_off".
Signed-off-by: Peter Feuerer <peter@piie.net> Cc: Borislav Petkov <petkovbb@gmail.com> Cc: Andreas Mohr <andi@lisas.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Regression caused in 2.6.23 and then despite repeated requests never fixed
or dealt with (Petr promised to sort it in 2008 but seems to have
forgotten).
Enough is enough - remove the problem line that was added. If it upsets
someone they've had two years to deal with it and at the very least it'll
rattle their cage and wake them up.
Add PCI .shutdown method so that we can disable the device during
shutdown or reboot. Without this, the reboot doesn't work well on
some platforms.
This fixes http://bugzilla.intellinuxwireless.org/show_bug.cgi?id=2124
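A hedged sketch of how such a hook is typically wired up; the handler name and body are assumptions, only the .shutdown member and PCI helpers are from the PCI API:

	static void iwl_pci_shutdown(struct pci_dev *pdev)
	{
		/* Quiesce and disable the device for reboot/shutdown. */
		pci_disable_device(pdev);
	}

	static struct pci_driver iwl_driver = {
		.name     = DRV_NAME,
		.id_table = iwl_hw_card_ids,
		.probe    = iwl_pci_probe,
		.remove   = __devexit_p(iwl_pci_remove),
		.shutdown = iwl_pci_shutdown,
	};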
Tested-by: pablo <pablolm2005@gmail.com> Signed-off-by: Zhu Yi <yi.zhu@intel.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Since the rfkill rework in 2.6.31, the driver is always resuming with
the radios disabled.
Change thinkpad-acpi to ask the firmware to resume with the radios in
the last state. This fixes the Bluetooth and WWAN rfkill switches.
Note that it means we respect the firmware's oddities. Should the
user toggle the hardware rfkill switch on and off, it might cause the
radios to resume enabled.
UWB is an unknown quantity since it has nowhere the same level of
firmware support (no control over state storage in NVRAM, for
example), and might need further fixing. Testers welcome.
This change fixes a regression from 2.6.30.
Reported-by: Jerone Young <jerone.young@canonical.com> Reported-by: Ian Molton <ian.molton@collabora.co.uk> Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br> Tested-by: Ian Molton <ian.molton@collabora.co.uk> Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
According to a report, the R50e wants EC-based brightness control,
even if it uses an Intel GPU. The current driver default was reported
to not work at all.
This bug can be worked around by the "brightness_mode=3" module
parameter.
Change the default of the R50e and R51 2xxx models (which use the same
EC firmware, 1V) to TPACPI_BRGHT_Q_EC, but keep TPACPI_BRGHT_Q_ASK set
for now, as I'd like to get more reports.
Reported-by: Ferenc Wagner <wferi@niif.hu> Tested-by: Ferenc Wagner <wferi@niif.hu> Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br> Cc: stable@kernel.org Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
A memory cgroup has a memory.memsw.usage_in_bytes file. It shows the sum
of the usage of pages and swapents in the cgroup. Presently the root
cgroup's memsw.usage_in_bytes shows the wrong value - the number of
swapents is not added.
In the current vblank-wait implementation, if we turn off VGA output,
drm_wait_vblank will still wait on the disabled pipe until timeout,
because vblank on the pipe is assumed to be enabled. This would cause
slow system response on some systems such as Moblin.
This patch resolves the issue by adding a drm helper function,
drm_vblank_off, which explicitly clears vblank_enabled[crtc], wakes up
any waiting queue and saves the last vblank counter before turning off
the crtc. It also slightly changes drm_vblank_get to ensure that we
return immediately if trying to wait on a disabled pipe.
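A sketch of what such a helper looks like, based on the description above (the exact body in drm_irq.c may differ):

	void drm_vblank_off(struct drm_device *dev, int crtc)
	{
		unsigned long irqflags;

		spin_lock_irqsave(&dev->vbl_lock, irqflags);
		DRM_WAKEUP(&dev->vbl_queue[crtc]);
		dev->vblank_enabled[crtc] = 0;
		dev->last_vblank[crtc] =
			dev->driver->get_vblank_counter(dev, crtc);
		spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
	}
	EXPORT_SYMBOL(drm_vblank_off);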
Signed-off-by: Li Peng <peng.li@intel.com> Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
[anholt: hand-applied for conflicts with overlay changes] Signed-off-by: Eric Anholt <eric@anholt.net> Cc: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Not only ps_sdata but also IEEE80211_CONF_PS is to be considered
before restoring PS in scan_ps_disable(). For instance, when ps_sdata
is set but CONF_PS is not set just because the dynamic timer is still
running, a sw scan leads to setting of CONF_PS in scan_ps_disable
instead of restarting the dynamic PS timer.
Also for the above case, a null data frame is to be sent after
returning to the operating channel, which was not happening with the
current implementation. This patch fixes this too.
Signed-off-by: Vivek Natarajan <vnatarajan@atheros.com> Reviewed-by: Kalle Valo <kalle.valo@nokia.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch fixes a bug in ath9k's tx status check, which
caused mac80211 to consider regularly transmitted unicast frames
as un-acked.
When checking the ts_status field for errors, it needs to be masked
with ATH9K_TXERR_FILT, because this field also contains other fields
like ATH9K_TX_ACKED.
Without this patch, AP mode is pretty much unusable, as hostapd
checks the ACK status for the frames that it injects.
Signed-off-by: Felix Fietkau <nbd@openwrt.org> Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Atheros single stream AR9285 and AR9271 have half the PCU TX FIFO
buffer size of that of dual stream devices. Dual stream devices
have a max PCU TX FIFO size of 8 KB while single stream devices
have 4 KB. Single stream devices have an issue though and require
hardware to use only half of their PCU TX FIFO capacity, 2 KB,
and this requires a change in software.
Technically a change would not have been required (except for frame
burst considerations of 128 bytes) if these devices had been
able to use the full 4 KB of the PCU TX FIFO, but our systems
engineers recommend that only 2 KB be used. We enforce this through
software by reducing the max frame trigger level to 2 KB.
Fixing the max frame trigger level should then have a few benefits:
* The PER will now be adjusted as designed for underruns when the
max trigger level is reached. This should help alleviate the
bus as the rate control algorithm chooses a slower rate which
should ensure frames are transmitted properly under high system
bus load.
* The poll we use on our TX queues should now trigger and work
as designed for single stream devices. The hardware passes
data from each TX queue on the PCU TX FIFO queue respecting each
queue's priority. The new trigger level ensures this seeding of
the PCU TX FIFO queue occurs as designed which could mean avoiding
false resets and actually resetting hw correctly when a TX queue
is indeed stuck.
* Some undocumented / unsupported behaviour could have been triggered
when the max trigger level was being set to 4 KB on single
stream devices. It's not yet clear to me what this issue was.
Cc: Kyungwan Nam <kyungwan.nam@atheros.com> Cc: Bennyam Malavazi <bennyam.malavazi@atheros.com> Cc: Stephen Chen <stephen.chen@atheros.com> Cc: Shan Palanisamy <shan.palanisamy@atheros.com> Cc: Paul Shaw <paul.shaw@atheros.com> Signed-off-by: Vasanthakumar Thiagarajan <vasanth@atheros.com> Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When mac80211 was telling us to go into powersave we listened
and immediately turned RX off. This meant hardware would not
see the ACKs from the AP we're associated with, and we'd end up
helplessly retransmitting the null data frame in a loop.
Fix this by keeping track of the transmitted nullfunc frames
and only when we are sure the AP has sent back an ACK do we
go ahead and shut RX off.
Signed-off-by: Vasanthakumar Thiagarajan <vasanth@atheros.com> Signed-off-by: Vivek Natarajan <Vivek.Natarajan@atheros.com> Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
For some reason the export of the event print format to userspace
uses '#fmt' which breaks if the format string is anything but a plain
string, for example if it is built with macros then the macro names
are exported instead of their contents.
Use
"\"%s\"", fmt
instead of
"%s", #fmt
to export the string and not the way it is built.
For example, in net/mac80211/driver-trace.h for the trace event drv_start
there is:
TP_printk(
LOCAL_PR_FMT, LOCAL_PR_ARG
)
Which used to produce:
print fmt: LOCAL_PR_FMT, REC->wiphy_name
Now produces:
print fmt: "%s", REC->wiphy_name
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
LKML-Reference: <20091113224009.GB23942@elte.hu> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
For PPC architecture with PHY Revision < 3, a read of the register
B43_MMIO_HWENABLED_LO will cause a CPU fault unless b43legacy_status()
returns a value of 2 (B43legacy_STAT_STARTED); however, one finds that
the driver is unable to associate after resuming from hibernation unless
this routine returns 1. To satisfy both conditions, the routine is rewritten
to return TRUE whenever b43legacy_status() returns a value < 2.
This patch fixes the second problem listed in the postings for Red Hat
Bugzilla #538523.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
"ARCH" can be just about anything, so we shouldn't end up
with UTS_MACHINE of "sparc" in a 64-bit kernel build just
because someone set the personality using 'sparc32' or
similar. CONFIG_SPARC64 drives the compilation and
therefore provides the definitive value, not "ARCH".
First, the softirq range check forgets to subtract STACK_BIAS
before comparing with %sp. Next, on failure the wrong label
is jumped to, resulting in a bogus stack being loaded.
Reported-by: Igor Kovalenko <igor.v.kovalenko@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When we are trying to see if a range property entry applies
to a given address, we are overly strict about the type.
We should only allow I/O ranges for I/O addresses, and only allow
CONFIG space ranges for CONFIG space addresses.
However for MEM ranges, they come in 32-bit and 64-bit flavors.
And a lack of an exact match is OK if the range is 32-bit and
the address is 64-bit. We can assign a 64-bit address properly
into a 32-bit parent range just fine.
So allow it.
Reported-by: Patrick Finnegan <pat@computer-refuge.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
About 50% of shutdowns of the b44 Ethernet adapter end in a kernel panic
with kernels compiled with the stack protector.
Checking b44_magic_pattern() return values, one call of
b44_magic_pattern() returns 127. That means set_bit(128, pmask)
was called on line 1509, so bit 0 of the 17th byte of pmask was
overwritten. But pmask has only 16 bytes, so stack corruption happens.
It seems that the set_bit() on line 1509 is always off by one bit.
The fix does not only solve the stack corruption, but also makes Wake
On LAN working on my onboard B44 on Asus A7V-333X mainboard.
It seems that this problem affects all kernel versions since commit 725ad800 ([PATCH] b44: add wol for old nic) on 2006-06-20.
Signed-off-by: Stanislav Brabec <sbrabec@suse.cz> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When a large packet gets reassembled by ip_defrag(), the head skb
accounts for all the fragments in skb->truesize. If this packet is
refragmented again, skb->truesize is not re-adjusted to reflect only
the head size since it's not owned by a socket. If the head fragment
then gets recycled and reused for another received fragment, it might
exceed the defragmentation limits due to its large truesize value.
skb_recycle_check() explicitly checks for linear skbs, so any recycled
skb should reflect its true size in skb->truesize. Change ip_fragment()
to also adjust the truesize value of skbs not owned by a socket.
Reported-and-tested-by: Ben Menchaca <ben@bigfootnetworks.com> Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch fixes a problem in the TCP connection timeout calculation.
Currently, timeout decisions are made on the basis of the current
tcp_time_stamp and retrans_stamp, which is usually set at the first
retransmission.
However, if the retransmission fails in tcp_retransmit_skb(),
retrans_stamp is not updated and remains zero. This leads to wrong
decisions in retransmits_timed_out() if tcp_time_stamp is larger than
the specified timeout, which is very likely.
In this case, the TCP connection dies after the first attempted
(and unsuccessful) retransmission.
With this patch, tcp_skb_cb->when is used instead, when retrans_stamp
is not available.
This bug has been introduced together with retransmits_timed_out() in
2.6.32, as the number of retransmissions has been used for timeout
decisions before. The corresponding commit was 6fa12c85031485dff38ce550c24f10da23b0adaa (Revert Backoff [v3]:
Calculate TCP's connection close threshold as a time value.).
Thanks to Ilpo Järvinen for code suggestions and Frederic Leroy for
testing.
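A hedged sketch of the fallback described above, shaped after retransmits_timed_out(); the exact surrounding code is an assumption:

	unsigned int start_ts;

	if (likely(tp->retrans_stamp))
		start_ts = tp->retrans_stamp;
	else	/* first retransmission failed, stamp was never set */
		start_ts = TCP_SKB_CB(tcp_write_queue_head(sk))->when;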
Reported-by: Frederic Leroy <fredo@starox.org> Signed-off-by: Damian Lukowski <damian@tvk.rwth-aachen.de> Acked-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Fix checking of the currently programmed UDMA mode.
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com> Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The "wipe key" message is used to wipe the volume key from memory
temporarily, for example when suspending to RAM.
But the initialisation vector in ESSIV mode is calculated from the
hashed volume key, so the wipe message should wipe this IV key too and
reinitialise it when the volume key is reinstated.
This patch adds an IV wipe method called from a wipe message callback.
ESSIV is then reinitialised using the init function added by the
last patch.
Signed-off-by: Milan Broz <mbroz@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Under some special conditions the snapshot hash_size is calculated as zero.
This patch instead sets a minimum value of 64, the same as for the
pending exception table.
rounddown_pow_of_two(0) is an undefined operation (it expands to shift
by -1). init_exception_table with an argument of 0 would fail with -ENOMEM.
The way to trigger the problem is to create a snapshot with a chunk size
that is larger than the origin device.
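A minimal sketch of the clamp described above (the surrounding function and exact placement are assumptions):

	if (hash_size < 64)
		hash_size = 64;
	hash_size = rounddown_pow_of_two(hash_size);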
Fix a reported deadlock if there are still unprocessed multipath events
on a device that is being removed.
_hash_lock is held during dev_remove while trying to send the
outstanding events. Sending the events requests the _hash_lock
again in dm_copy_name_and_uuid.
This patch introduces a separate lock around regions that modify the
link to the hash table (dm_set_mdptr) or the name or uuid so that
dm_copy_name_and_uuid no longer needs _hash_lock.
Additionally, dm_copy_name_and_uuid can only be called if md exists
so we can drop the dm_get() and dm_put() which can lead to a BUG()
while md is being freed.
Define private structures for IV so it's easy to add further attributes
in a following patch which fixes the way key material is wiped from
memory. Also move ESSIV destructor and remove unnecessary 'status'
operation.
There are no functional changes in this patch.
Signed-off-by: Milan Broz <mbroz@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Take snapshot lock only for STATUSTYPE_INFO, not STATUSTYPE_TABLE.
Commit 4c6fff445d7aa753957856278d4d93bcad6e2c14
(dm-snapshot-lock-snapshot-while-supplying-status.patch)
introduced this use of the lock, but userspace applications using
libdevmapper have been found to request STATUSTYPE_TABLE while the device
is suspended and the lock is already held, leading to deadlock. Since
the lock is not necessary in this case, don't try to take it.
Currently if the balloon driver is unable to increase the guest's
reservation it assumes the failure was due to reaching its full
allocation, gives up on the ballooning operation and records the limit
it reached as the "hard limit". The driver will not try again until
the target is set again (even to the same value).
However it is possible that ballooning has in fact failed due to
memory pressure in the host and therefore it is desirable to keep
attempting to reach the target in case memory becomes available. The
most likely scenario is that some guests are ballooning down while
others are ballooning up and therefore there is temporary memory
pressure while things stabilise. You would not expect a well behaved
toolstack to ask a domain to balloon to more than its allocation nor
would you expect it to deliberately over-commit memory by setting
balloon targets which exceed the total host memory.
This patch drops the concept of a hard limit and causes the balloon
driver to retry increasing the reservation on a timer in the same
manner as when decreasing the reservation.
Also if we partially succeed in increasing the reservation
(i.e. receive fewer pages than we asked for) then we may as well keep
those pages rather than returning them to Xen.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
I have observed cases where the implicit stop_machine_destroy() done by
stop_machine() hangs while destroying the workqueues, specifically in
kthread_stop(). This seems to be because timer ticks are not restarted
until after stop_machine() returns.
Fortunately stop_machine provides a facility to pre-create/post-destroy
the workqueues so use this to ensure that workqueues are only destroyed
after everything is really up and running again.
I only actually observed this failure with 2.6.30. It seems that newer
kernels are somehow more robust against doing kthread_stop() without timer
interrupts (I tried some backports of some likely looking candidates but
did not track down the commit which added this robustness). However this
change seems like a reasonable belt&braces thing to do.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If Xen wants to return to a 32b usermode with sysret it must use the
right form. When using VGCF_in_syscall to trigger this, it looks at
the code segment and does a 32b sysret if it is FLAT_USER_CS32.
However, this is different from __USER32_CS, so it fails to return
properly if we use the normal Linux segment.
So avoid the whole mess by dropping VGCF_in_syscall and simply using a
plain iret to return to usermode.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Acked-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
On resume irq_info[*].evtchn is reset to 0 since event channel mappings
are not preserved over suspend/resume. The other contents of irq_info
is preserved to allow rebind_evtchn_irq() to function.
However when a device resumes it will try to unbind from the
previous IRQ (e.g. blkfront goes blkfront_resume() -> blkif_free() ->
unbind_from_irqhandler() -> unbind_from_irq()). This will fail due to the
check for VALID_EVTCHN in unbind_from_irq() and the IRQ is leaked. The
device will then continue to resume and allocate a new IRQ, eventually
leading to find_unbound_irq() panic()ing.
Fix this by changing unbind_from_irq() to handle teardown of interrupts
which have type!=IRQT_UNBOUND but are not currently bound to a specific
event channel.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The existing error handling has a few issues:
- If freeze_processes() fails it exits with shutting_down = SHUTDOWN_SUSPEND.
- If dpm_suspend_noirq() fails it exits without resuming xenbus.
- If stop_machine() fails it exits without resuming xenbus or calling
dpm_resume_end().
- xs_suspend()/xs_resume() and dpm_suspend_noirq()/dpm_resume_noirq() were not
nested in the obvious way.
Fix by ensuring each failure case goto's the correct label. Treat a failure of
stop_machine() as a cancelled suspend in order to follow the correct resume
path.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
tick_resume() is never called on secondary processors. Presumably this
is because they are offlined for suspend on native and so this is
normally taken care of in the CPU onlining path. Under Xen we keep all
CPUs online over a suspend.
This patch papers over the issue for me but I will investigate a more
generic, less hacky, way of doing the same.
tick_suspend is also only called on the boot CPU, which I presume should
be fixed too.
Signed-off-by: Ian Campbell <Ian.Campbell@citrix.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
dpm_resume_noirq() takes a mutex, so it can't be called from a no-interrupt
context. Don't call it from within the stop-machine function, but just
afterwards, since we're resuming anyway, regardless of what happened.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The commit "xen: re-register runstate area earlier on resume" caused us
to never try and setup the runstate area for secondary CPUs. Ensure that
we do this...
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Otherwise the timer is disabled by dpm_suspend_noirq() which in turn prevents
correct operation of stop_machine on multi-processor systems and breaks
suspend.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
pvops kernels >= 2.6.30 can currently only be saved and restored once. The
second attempt to save results in:
ERROR Internal error: Frame# in pfn-to-mfn frame list is not in pseudophys
ERROR Internal error: entry 0: p2m_frame_list[0] is 0xf2c2c2c2, max 0x120000
ERROR Internal error: Failed to map/save the p2m frame list
xen: split construction of p2m mfn tables from registration
Build the p2m_mfn_list_list early with the rest of the p2m table, but
register it later when the real shared_info structure is in place.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
The unforeseen side-effect of this change was to cause the mfn list list to not
be rebuilt on resume. Prior to this change it would have been rebuilt via
xen_post_suspend() -> xen_setup_shared_info() -> xen_setup_mfn_list_list().
Fix by explicitly calling xen_build_mfn_list_list() from xen_post_suspend().
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This is necessary to ensure the runstate area is available to
xen_sched_clock before any calls to printk which will require it in
order to provide a timestamp.
I chose to pull the xen_setup_runstate_info call out of xen_time_init into
the caller in order to maintain parity with calling
xen_setup_runstate_info separately from calling xen_time_resume.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
drm/ttm fails to build on MIPS because "struct page" is not known:
| In file included from drivers/gpu/drm/ttm/ttm_memory.c:28:
| include/drm/ttm/ttm_memory.h:154: warning: 'struct page' declared inside parameter list
| include/drm/ttm/ttm_memory.h:154: warning: its scope is only this definition or declaration, which is probably not what you want
| include/drm/ttm/ttm_memory.h:156: warning: 'struct page' declared inside parameter list
| drivers/gpu/drm/ttm/ttm_memory.c:540: error: conflicting types for 'ttm_mem_global_alloc_page'
| include/drm/ttm/ttm_memory.h:154: error: previous declaration of 'ttm_mem_global_alloc_page' was here
| drivers/gpu/drm/ttm/ttm_memory.c:561: error: conflicting types for 'ttm_mem_global_free_page'
| include/drm/ttm/ttm_memory.h:156: error: previous declaration of 'ttm_mem_global_free_page' was here
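A hedged sketch of the likely fix: forward-declare struct page in include/drm/ttm/ttm_memory.h so the prototypes using it are well formed on architectures that don't otherwise pull in the definition (the prototype shapes below are assumptions):

	struct page;

	extern int ttm_mem_global_alloc_page(struct ttm_mem_global *glob,
					     struct page *page,
					     bool no_wait, bool interruptible);
	extern void ttm_mem_global_free_page(struct ttm_mem_global *glob,
					     struct page *page);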
Signed-off-by: Martin Michlmayr <tbm@cyrius.com> Acked-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
e821ea70f3b4873b50056a1e0f74befed1014c09 introduced a bug by copying
some 64-bit originated code as-is to be used by both 32 and 64-bit,
but this code contains a 64-bit-only "cmpdi" instruction.
This changes it to cmpwi, which is fine since VRSAVE can only contain
a 32-bit value anyway.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In commit 0512a9a8e277a9de2820211eef964473b714ae65, we unilaterally zero the
"pwm invert" bit in the fan behavior configuration register. On my PowerBook
G4, this results in the fans going to full speed at low temperature and
shutting off at high temperature because the pwm invert bit is supposed to be
set.
Therefore, record the pwm invert bit at driver load time, and write the bit
into the fan behavior control register. This restores correct behavior on my
PBG4 and should work around the bit being set to the wrong value after
suspend/resume (which is what the original patch was trying to fix). It also
fixes a minor omission where the pwm invert bit correction is NOT performed
when switching into automatic mode.
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Windfarm SMU control is explicitly missing support for a second CPU pump in G5 PowerMacs. Such machines actually exist (specifically Quads with a second pump), so this patch adds detection for it.
Most callers of pmd_none_or_clear_bad() check whether the target page is
in a hugepage or not, but walk_page_range() does not check it. So if we
read /proc/pid/pagemap for a hugepage on an x86 machine, the hugepage
memory is leaked as shown below. This patch fixes it.
Details
=======
My test program (leak_pagemap) works as follows:
- creat() and mmap() a file on hugetlbfs (file size is 200MB == 100 hugepages,)
- read()/write() something on it,
- call page-types with option -p (walk around the page tables),
- munmap() and unlink() the file on hugetlbfs
Most callers of pmd_none_or_clear_bad() check whether the target page is
in a hugepage or not, but mincore() and walk_page_range() do not check it.
So if we use mincore() on a hugepage on an x86 machine, the hugepage memory
is leaked as shown below. This patch fixes it by extending the mincore()
system call to support hugepages.
Details
=======
My test program (leak_mincore) works as follows:
- creat() and mmap() a file on hugetlbfs (file size is 200MB == 100 hugepages,)
- read()/write() something on it,
- call mincore() for first ten pages and printf() the values of *vec
- munmap() and unlink() the file on hugetlbfs
The return values in *vec from mincore() are set to 0 even though the
hugepage should be in memory, and 1 hugepage is still accounted as used
even though there is no longer a file on hugetlbfs.
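A minimal sketch of the reproducer described above (error checking omitted; the hugetlbfs mount point and file name are assumptions):

	#include <stdio.h>
	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/mman.h>

	#define MAP_LEN	(200UL << 20)	/* 200MB == 100 hugepages of 2MB */

	int main(void)
	{
		unsigned char vec[10];
		char *p;
		int fd, i;

		fd = open("/hugetlbfs/test", O_CREAT | O_RDWR, 0644);
		p = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		memset(p, 1, MAP_LEN);			/* touch every hugepage */
		mincore(p, 10 * getpagesize(), vec);	/* first ten (small) pages */
		for (i = 0; i < 10; i++)
			printf("%d ", vec[i]);
		printf("\n");
		munmap(p, MAP_LEN);
		close(fd);
		unlink("/hugetlbfs/test");
		return 0;
	}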
There are different bits used to convey the setting of the rfkill
switch to the driver. The current driver only supports one of these
possibilities. These changes were derived from the latest version
of the vendor driver.
This patch fixes the regression noted in kernel Bugzilla #14743.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Reported-and-tested-by: Antti Kaijanmäki <antti@kaijanmaki.net> Tested-by: Hin-Tak Leung <hintak.leung@gmail.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Since sometimes mac80211 queues up a scan request
to only act on it later, it must be allowed to
(internally) cancel a not-yet-running scan, e.g.
when the interface is taken down. This condition
was missing since we always checked only the
local->scanning variable which isn't yet set in
that situation.
Reported-by: Luis R. Rodriguez <mcgrof@gmail.com> Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The patch ("mac80211: Use correct sign for mesh active path
refresh.") was actually a bug. Reverted it and improved the
explanation of how mesh path refresh works.
Signed-off-by: Javier Cardona <javier@cozybit.com> Signed-off-by: Andrey Yurovsky <andrey@cozybit.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Paths to mesh portals were being timed out immediately after each use in
intermediate forwarding nodes. mppath->exp_time is set to the expiration time
so assigning it to jiffies was marking the path as expired.
Signed-off-by: Javier Cardona <javier@cozybit.com> Signed-off-by: Andrey Yurovsky <andrey@cozybit.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
On a 32-bit machine, the BIT() macro does not give the required
bit value if the bit is more than 31. In ieee802_11_parse_elems_crc(),
BIT() is supposed to produce the bit value for IDs greater than 31 (42 (id of ERP_INFO_IE),
37 (CHANNEL_SWITCH_IE), (42), 32 (POWER_CONSTRAINT_IE), 45 (HT_CAP_IE),
61 (HT_INFO_IE)). As we do not get the required bit value for the above
IEs, the crc over these IEs is never calculated, so any dynamic change in these
IEs after association is not really handled on 32-bit platforms.
This patch fixes this issue.
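A small illustration of the problem (not the actual fix): BIT(nr) expands to 1UL << (nr), and on a 32-bit kernel unsigned long is 32 bits wide, so a shift count of 32 or more is undefined and cannot produce the intended bit:

	#define BIT(nr)	(1UL << (nr))

	u64 crc_ids = BIT(42);		/* undefined on 32-bit: shift >= width of 1UL */
	u64 wanted  = 1ULL << 42;	/* what is actually needed */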
Signed-off-by: Vasanthakumar Thiagarajan <vasanth@atheros.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Do not read IIR in serial8250_start_tx when UART_BUG_TXEN
Reading the IIR clears some outstanding interrupts so it is not safe.
Instead, simply transmit immediately if the buffer is empty without
regard to IIR.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Jiri Kosina <jkosina@suse.cz> Cc: Alan Cox <alan@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch (as1310) works around a race in dev_driver_string(). If
the device is unbound while the function is running, dev->driver might
become NULL after we test it and before we dereference it.
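A sketch of the workaround based on the description above; the exact prototype and formatting in drivers/base/core.c may differ:

	const char *dev_driver_string(const struct device *dev)
	{
		struct device_driver *drv;

		/* dev->driver can change to NULL underneath us because of
		 * unbinding, so read it once and use that value. */
		drv = ACCESS_ONCE(dev->driver);
		return drv ? drv->name :
			(dev->bus ? dev->bus->name :
			(dev->class ? dev->class->name : ""));
	}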
Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Cc: Oliver Neukum <oliver@neukum.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Setting fops and private data outside of the mutex at debugfs file
creation introduces a race where the files can be opened with the wrong
file operations and private data. It is easy to trigger with a process
waiting on file creation notification.
devpts_get_tty() assumes that the inode passed in is associated with a valid
pty. But if the only reference to the pty is via a bind-mount, the inode
passed to devpts_get_tty(), while valid, would refer to a pty that no longer
exists.
With a lot of debug effort, Grzegorz Nosek developed a small program (see
below) to reproduce a crash on recent kernels. This crash is a regression
introduced by the commit: