Sebastian Ott [Thu, 25 Dec 2008 12:39:13 +0000 (13:39 +0100)]
[S390] cio: introduce cio_commit_config
To change the configuration of a subchannel we alter the modifiable
bits of the subchannel's schib field and issue a modify subchannel.
It can happen that not all changes are applied or, worse, that they are
quietly overwritten by the hardware. With the next store subchannel
we obtain the current state of the hardware but lose our target
configuration.
With this patch we introduce a subchannel_config structure which
contains the target subchannel configuration. Additionally the msch
wrapper cio_modify is replaced with cio_commit_config which
copies the desired changes to a temporary schib. msch is then
called with the temporary schib. This schib is only written back
to the subchannel if all changes were applied.
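In rough sketch form (simplified; the helper names cio_apply_config() and
cio_check_config() are illustrative placeholders for the code that copies
and verifies the modifiable bits):
/* Simplified sketch of the idea, not the exact kernel code. */
int cio_commit_config(struct subchannel *sch)
{
        struct schib schib;

        if (stsch(sch->schid, &schib) || !css_sch_is_valid(&schib))
                return -ENODEV;

        /* Copy the desired changes from sch->config into the temporary schib. */
        cio_apply_config(sch, &schib);

        if (msch(sch->schid, &schib))
                return -EIO;

        /* Re-read the hardware state and verify that all changes were applied. */
        if (stsch(sch->schid, &schib) || !css_sch_is_valid(&schib))
                return -ENODEV;
        if (!cio_check_config(sch, &schib))
                return -EAGAIN;

        /* Only now store the schib back into the subchannel structure. */
        memcpy(&sch->schib, &schib, sizeof(schib));
        return 0;
}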
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Sebastian Ott [Thu, 25 Dec 2008 12:39:12 +0000 (13:39 +0100)]
[S390] cio: introduce cio_update_schib
There is a chance that we get condition code 0 for a stsch but
the resulting schib is not valid. In the current code there are
two cases:
* we do a check for validity of the schib after stsch, but at this
time we have already stored the invalid schib in the subchannel
structure. This may lead to problems.
* we don't do a check for validity, which is not that good either.
The patch addresses both issues by introducing the stsch wrapper
cio_update_schib which performs stsch on a local schib. This schib
is only written back to the subchannel if it's valid.
Side note: for some functions (chp_events) the return codes are
different now (-ENXIO vs. -ENODEV), but this shouldn't do any harm
since the callers don't check for _specific_ errors.
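A minimal sketch of the wrapper described above (simplified):
/* Simplified sketch, not necessarily the exact kernel code. */
int cio_update_schib(struct subchannel *sch)
{
        struct schib schib;

        if (stsch(sch->schid, &schib) || !css_sch_is_valid(&schib))
                return -ENODEV;

        /* The schib is only written back to the subchannel if it is valid. */
        memcpy(&sch->schib, &schib, sizeof(schib));
        return 0;
}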
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cornelia Huck [Thu, 25 Dec 2008 12:39:09 +0000 (13:39 +0100)]
[S390] cio: Don't fail probe for I/O subchannels.
If we fail the probe for an I/O subchannel, we won't be able
to unregister it again since there are no sch_event()
callbacks for unbound subchannels. Just succeed the probe in
any case and schedule unregistering the subchannel.
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cornelia Huck [Thu, 25 Dec 2008 12:39:08 +0000 (13:39 +0100)]
[S390] cio: Only register ccw_device for registered subchannel.
There is a race between io_subchannel_register() and
io_subchannel_sch_event() which may cause a subchannel to be
unregistered because it is no longer operational before
io_subchannel_register() has run. We need to check whether the
subchannel is still registered before the ccw device can be
registered and just bail out if it is not.
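In sketch form, the added check amounts to something like this (exact
placement in the registration path is illustrative):
/* Sketch: bail out if the subchannel was unregistered in the meantime. */
if (!device_is_registered(&sch->dev))
        return; /* subchannel already gone, do not register the ccw device */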
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cornelia Huck [Thu, 25 Dec 2008 12:39:07 +0000 (13:39 +0100)]
[S390] cio: Fix I/O subchannel refcounting.
Subchannel refcounting was incorrect in some places: in particular,
a reference was missing when ccw_device_call_sch_unregister()
was called, and the refcount was not correctly switched after
moving devices.
Fix this by establishing the following rules:
- The ccw_device obtains a reference on its parent subchannel
when dev.parent is set and gives it up in its release
function. This is needed because we need a parent reference
for correct refcounting even before the ccw device is (if at
all) registered.
- When calling device_move(), obtain a reference on the new
subchannel before moving the ccw device and give up the
reference on the old parent after moving. This brings the
refcount in line with the first rule.
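The second rule in sketch form (two-argument device_move() as used in this
kernel; error handling trimmed):
/* Sketch only: pin the new parent before the move, drop the old one afterwards. */
get_device(&new_sch->dev);
if (device_move(&cdev->dev, &new_sch->dev)) {
        /* Move failed: give the reference on the new parent back. */
        put_device(&new_sch->dev);
} else {
        /* Move succeeded: the reference on the old parent is no longer needed. */
        put_device(&old_sch->dev);
}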
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cornelia Huck [Thu, 25 Dec 2008 12:39:06 +0000 (13:39 +0100)]
[S390] cio: Fix reference counting for online/offline.
The current code attempts to get an extra reference count
for online devices by doing a get_device() in ccw_device_online()
and a put_device() in ccw_device_done(). However, this
- incorrectly obtains an extra reference for disconnected
devices becoming available again (since they are already
online)
- needs special checks for css_init_done in order to handle
the console device
- is not obvious and
- may incorrectly drop a reference count in ccw_device_done() if
that function is called after path verification for a device
that just became not operational.
So let's just get the reference in ccw_device_set_online() and
drop it in ccw_device_set_offline(). (Unfortunately, we still
need the special case in io_subchannel_probe().)
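In sketch form (the actual online/offline processing omitted):
/* Sketch: one extra reference per online device, taken and dropped symmetrically. */
int ccw_device_set_online(struct ccw_device *cdev)
{
        if (!get_device(&cdev->dev))
                return -ENODEV;
        /* ... bring the device online ... */
        return 0;
}

int ccw_device_set_offline(struct ccw_device *cdev)
{
        /* ... take the device offline ... */
        put_device(&cdev->dev);
        return 0;
}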
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
A kernel compile on 31 bit gives the following warnings in ptrace.c:
arch/s390/kernel/ptrace.c: In function 'peek_user':
arch/s390/kernel/ptrace.c:207: warning: unused variable 'dummy'
arch/s390/kernel/ptrace.c: In function 'poke_user':
arch/s390/kernel/ptrace.c:315: warning: unused variable 'dummy'
Getting rid of the dummy variables removes the warnings.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
[S390] s390/hvc_console: z/VM IUCV hypervisor console support
This patch introduces a new hypervisor console (HVC) back-end that provides
terminal access over the z/VM inter-user communication vehicle (IUCV).
The z/VM IUCV communication is independent of the regular tcp/ip network
and allows access even if there is no network connection between two
z/VM guest virtual machines.
The z/VM IUCV hypervisor console back-end helps the user to access a
z/VM guest virtual machine that lacks network connectivity and thus
provides a "full-screen" terminal alternative to 3215/3270 terminal sessions.
Use the hvc_iucv=[0..8] kernel boot parameter to specify the number of
HVC terminals using a z/VM IUCV back-end.
A recent version of the s390-tools package is required to establish a
terminal connection to a z/VM IUCV hypervisor console back-end.
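For example, booting the guest with "hvc_iucv=4" makes four HVC terminals
available over z/VM IUCV.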
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
For a large number of I/O requests the values were scaled by a binary
shift. The shift was not transparent to the user because the shift value
was not displayed. To make this interface more human readable, the
values are now scaled decimally and the scale factor is displayed.
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Christof Schmitt [Thu, 25 Dec 2008 12:38:50 +0000 (13:38 +0100)]
[S390] zfcp: Report microcode level through service level interface
Register zfcp with the new /proc/service_level interface to report the
FCP microcode level. When the adapter goes offline or a channel path
disappears, zfcp unregisters, since the microcode version might change
and zfcp does not know about it.
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add a new proc interface /proc/service_levels that allows any code
to report a relevant service level, e.g. the microcode level of
devices, the service level of the hypervisor, etc.
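A hedged sketch of how a driver might use this interface, assuming it
exposes register_service_level()/unregister_service_level() and a
struct service_level with a seq_print callback:
#include <linux/seq_file.h>
#include <asm/sysinfo.h>        /* assumed home of struct service_level */

static void my_print_service_level(struct seq_file *m, struct service_level *slr)
{
        /* Print one line describing the reported service level. */
        seq_printf(m, "Example device: microcode level 1.2.3\n");
}

static struct service_level my_service_level = {
        .seq_print = my_print_service_level,
};

static int __init my_service_level_init(void)
{
        return register_service_level(&my_service_level);
}

static void __exit my_service_level_exit(void)
{
        /* Unregister when the reported level can no longer be trusted. */
        unregister_service_level(&my_service_level);
}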
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Jan Glauber [Thu, 25 Dec 2008 12:38:48 +0000 (13:38 +0100)]
[S390] qdio: fix error reporting for hipersockets
Hipersocket connections can encounter temporary busy conditions.
In case the busy bit is set, we retry the SIGA operation immediately.
If the busy condition still persists after 100 ms, we fail and report
the error to the upper layer. The second-stage retry logic is removed.
In case of ongoing busy conditions the upper layer needs to reset the
connection.
The reporting of a SIGA error is now done synchronously to allow the
network driver to requeue the buffers. Also no error trace is created
for the temporary SIGA errors so the error message view is not flooded.
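The retry policy in sketch form (do_siga_output() stands in for the actual
SIGA wrapper; condition code 2 is assumed to signal the busy condition):
/* Sketch of the retry policy only; helper names are illustrative. */
static int qdio_siga_output_with_retry(struct qdio_q *q)
{
        unsigned long deadline = jiffies + msecs_to_jiffies(100);
        int cc;

        do {
                cc = do_siga_output(q);
                if (cc != 2)
                        return cc;      /* done, or a real error for the caller */
                cpu_relax();            /* busy: retry immediately */
        } while (time_before(jiffies, deadline));

        return -EBUSY;                  /* still busy after 100 ms: report the error */
}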
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
- Use automatic acknowledgement of incoming buffers in QEBSM mode
- Move ACK for non-QEBSM mode always to the newest buffer to prevent
a race with qdio_stop_polling
- Remove the polling spinlock; the upper layer drivers return new buffers
in the same code path and cannot run in parallel
- Don't flood the error log in case of no-target-buffer-empty
- In handle_inbound we check if we would overwrite an ACK'ed buffer; if so,
advance the pointer to the oldest ACK'ed buffer so we don't overwrite an
empty buffer in qdio_stop_polling
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Jan Glauber [Thu, 25 Dec 2008 12:38:46 +0000 (13:38 +0100)]
[S390] qdio: rework debug feature logging
- make qdio_trace a per device view
- remove s390dbf exceptions
- remove CONFIG_QDIO_DEBUG, not needed anymore if we check for the level
before calling sprintf
- use snprintf for dbf entries
- add start markers to see if the dbf view wrapped
- add a global error view for all queues
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Jan Glauber [Thu, 25 Dec 2008 12:38:45 +0000 (13:38 +0100)]
[S390] qdio: fix compile warning under 31 bit
The QEBSM instructions are only available for CONFIG_64BIT; they are not
used under 31 bit. Make the compiler happy about the false positive:
drivers/s390/cio/qdio_main.c: In function 'qdio_inbound_q_done':
drivers/s390/cio/qdio_main.c:532: warning: 'state' may be used uninitialized in this function
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Jan Glauber [Thu, 25 Dec 2008 12:38:44 +0000 (13:38 +0100)]
[S390] qdio: add eqbs/sqbs instruction counters
Add counters for the eqbs and sqbs instructions that indicate how often
we issued the instructions and how often the instructions returned with
fewer buffers than specified.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Jan Glauber [Thu, 25 Dec 2008 12:38:43 +0000 (13:38 +0100)]
[S390] qdio: fix qeth port count detection
qeth needs to get the port count information before
qdio has allocated a page for the chsc operation.
Extend qdio_get_ssqd_desc() to store the data in the
specified structure.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Christian Maaser [Thu, 25 Dec 2008 12:38:42 +0000 (13:38 +0100)]
[S390] ap: Minor code beautification.
Changed some symbol names to make the code clearer.
Signed-off-by: Christian Maaser <cmaaser@de.ibm.com> Signed-off-by: Felix Beck <beckf@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The work function dispatched with schedule_work() can be run twice
on different cpus because run_workqueue clears the WORK_STRUCT_PENDING
bit and then executes the function. Another cpu can call schedule_work()
again and run the work function a second time before the first call
is completed. This patch serializes the etr and stp work functions with
a mutex.
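The serialization in sketch form (the ETR case; STP is analogous):
/* Sketch: a queued work function that cannot run twice in parallel. */
static DEFINE_MUTEX(etr_work_mutex);

static void etr_work_fn(struct work_struct *work)
{
        mutex_lock(&etr_work_mutex);
        /* ... ETR synchronization work ... */
        mutex_unlock(&etr_work_mutex);
}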
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Heiko Carstens [Thu, 25 Dec 2008 12:38:37 +0000 (13:38 +0100)]
[S390] convert etr/stp to stop_machine interface
This converts the etr and stp code to the new stop_machine interface,
which allows synchronizing all cpus without allocating any memory.
This way we get rid of the only reason why we haven't converted s390
to the generic IPI interface yet.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
arch_setup_additional_pages currently gets two arguments, the binary
format description and an indication whether the process uses an executable
stack or not. The second argument is not used by anybody; it could
be removed without replacement.
What actually does make sense is to pass an indication if the process
uses the elf interpreter or not. The glibc code will not use anything
from the vdso if the process does not use the dynamic linker, so for
statically linked binaries the architecture backend can choose not
to map the vdso.
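The reworked hook then looks roughly like this (map_vdso() is a hypothetical
stand-in for the per-architecture mapping code):
/* Sketch of the reworked hook. */
int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
{
        if (!uses_interp)
                return 0;       /* statically linked: glibc will not touch the vdso */
        return map_vdso(bprm);  /* hypothetical helper doing the actual mapping */
}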
Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The vmcp driver uses the session->mutex for concurrent access of the data
structures. Therefore, the BKL in vmcp_open does not protect against any
other function in the driver.
The BKL in vmcp_open would protect concurrent access to the module init
code, but all necessary steps have finished before misc_register is called.
We can safely remove lock_kernel from vmcp.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Heiko Carstens [Thu, 25 Dec 2008 12:37:59 +0000 (13:37 +0100)]
[S390] cpu topology: don't destroy cpu sets on topology change
Call rebuild_sched_domains instead of arch_reinit_sched_domains if
cpu topology changes. This leaves cpu sets alone which otherwise would
be destroyed.
If and how it makes sense to define cpu sets on a virtualized
architecture is another question.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Al Viro [Thu, 25 Dec 2008 12:37:58 +0000 (13:37 +0100)]
[S390] audit: get s390 ret_from_fork in sync with other architectures
On s390 we have ret_from_fork jump not to the "do all work we
normally do on return from syscall" as on x86, ppc, etc., but to the
"do all such work except audit". Historical reasons - the codepath
triggered when we have AUDIT process flag set is separated from the
normall one and they converge at sysc_return, which is the common
part of post-syscall work. And does not include calling audit_syscall_exit() -
that's done in the end of sysc_tracesys path, just before that path jumps
to sysc_return.
IOW, the child returning from fork()/clone()/vfork() doesn't
call audit_syscall_exit() at all, so no matter what we do with its
audit context, we are not going to see the audit entry.
The fix is simple: have ret_from_fork go to the point just past
the call of sys_.... in the 'we have AUDIT flag set' path. There we
have (64bit variant; for 31bit the situation is the same):
sysc_tracenogo:
tm __TI_flags+7(%r9),(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT)
jz sysc_return
la %r2,SP_PTREGS(%r15) # load pt_regs
larl %r14,sysc_return # return point is sysc_return
jg do_syscall_trace_exit
which is precisely what we need - check the flag, bugger off to sysc_return
if not set, otherwise call do_syscall_trace_exit() and bugger off to
sysc_return. r9 has just been properly set by ret_from_fork itself,
so we are fine.
Tested on s390x, seems to work fine. WARNING: it's been about
16 years since my last contact with 3X0 assembler[1], so additional
review would be very welcome. I don't think I've managed to screw it
up, but...
[1] that *was* in another country and besides, the box is dead...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Heiko Carstens [Thu, 25 Dec 2008 12:37:57 +0000 (13:37 +0100)]
[S390] cpu topology: fix cpu_core_map initialization
Common code doesn't call arch_update_cpu_topology() anymore on
cpu hotplug. But our architecture backend relied on that in order to
update the cpu_core_map. For machines without cpu topology support
this leads to uninitialized cpu_core_maps for cpus added later on.
To solve this, just initialize the maps with cpu_possible_map, since
that will always be valid for machines without topology support.
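In sketch form:
/* Sketch: without topology information, every possible cpu shares the map. */
int cpu;

for_each_possible_cpu(cpu)
        cpu_core_map[cpu] = cpu_possible_map;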
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
and are queued up for v2.6.29. This shows that the facility is still not
tested well enough to release into a stable kernel - disable it for now and
reactivate in .29. In .29 the hardware-branch-tracer will use the DS/BTS
facilities too - hopefully resulting in better code.
Kyle McMartin [Tue, 23 Dec 2008 13:44:30 +0000 (08:44 -0500)]
parisc: disable UP-optimized flush_tlb_mm
flush_tlb_mm's "optimized" uniprocessor case of allocating a new
context for userspace is exposing a race where we can suddenly return
to a syscall with the protection id and space id out of sync, trapping
on the next userspace access.
Harry Ciao [Tue, 23 Dec 2008 21:57:16 +0000 (13:57 -0800)]
edac: fix edac core deadlock when removing a device
When deleting an edac device, we have to wait for its edac_dev.work to be
completed before deleting the whole edac_dev structure. Since we have no
idea which work in the current edac_poller's workqueue is the work we are
concerned about, we wait for all work in the edac_poller's workqueue to be
processed. This is done via flush_cpu_workqueue(), which inserts a
wq_barrier at the tail of the workqueue and then sleeps on the
completion of this wq_barrier. The edac_poller wakes up the sleepers when
it processes this barrier.
The EDAC core creates only one kernel worker thread, edac_poller, to run the
work items of all current edac devices. They share the same callback function,
edac_device_workq_function(), which grabs the device_ctls_mutex mutex
before it checks the device. This is exactly
where edac_poller and rmmod have a great chance to deadlock.
In the call trace below, rmmod > ... >
edac_device_del_device >
edac_device_workq_teardown > flush_workqueue > flush_cpu_workqueue,
device_ctls_mutex has already been grabbed by
edac_device_del_device(). So, on one hand, rmmod sleeps on the
completion of a wq_barrier while holding device_ctls_mutex; on the other hand,
edac_poller is blocked on the same mutex when it runs any one
of the work items of the existing edac devices (note that this edac_dev.work is
likely to be totally irrelevant to the one that is being removed right now) and
never gets a chance to run the wq_barrier work that would wake rmmod up.
edac_device_workq_teardown() should not be called within the critical
region of device_ctls_mutex. Instead, it should be called after the mutex
is released, just as edac_pci_workq_teardown() and edac_mc_workq_teardown()
are in edac_pci_del_device() and edac_mc_del_mc().
Moreover, an edac_dev.work should first check whether its device is being
removed and, if so, bail out immediately. Since not all of the
existing edac devices are to be removed, this "shutting down" flag should be
confined to the edac device being removed. The current edac_dev.op_state can
be used to serve this purpose.
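A sketch of such a bail-out check in the work function (the container helper
and the op_state value name are assumed for illustration):
/* Sketch only; work_to_edac_dev() and OP_OFFLINE are illustrative names. */
static void edac_device_workq_function(struct work_struct *work_req)
{
        struct edac_device_ctl_info *edac_dev = work_to_edac_dev(work_req);

        mutex_lock(&device_ctls_mutex);

        /* The device is being removed: bail out immediately. */
        if (edac_dev->op_state == OP_OFFLINE) {
                mutex_unlock(&device_ctls_mutex);
                return;
        }

        /* ... run the periodic device check as before ... */

        mutex_unlock(&device_ctls_mutex);
}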
The original deadlock problem and the solution have been witnessed and
tested on actual hardware. Without the solution, running rmmod on an edac
driver would result in the deadlock below:
Evgeniy Polyakov [Tue, 23 Dec 2008 21:57:12 +0000 (13:57 -0800)]
w1: fix slave selection on big-endian systems
During testing of the w1-gpio driver I found that in "w1.c:679
w1_slave_found()" the device id is converted to little-endian with
"cpu_to_le64()", but it is not converted back to cpu format in "w1_io.c:293
w1_reset_select_slave()".
Based on a patch created by Andreas Hummel.
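The missing conversion in sketch form (assuming the id was stored with
cpu_to_le64() as described):
/* Sketch: convert the stored little-endian id back to cpu format before use. */
u64 rn = le64_to_cpu(*(u64 *)&sl->reg_num);

w1_write_block(sl->master, (u8 *)&rn, 8);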
[akpm@linux-foundation.org: remove unneeded cast] Reported-by: Andreas Hummel <andi_hummel@gmx.de> Signed-off-by: Evgeniy Polyakov <zbr@ioremap.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
V4L/DVB (9920): em28xx: fix NULL pointer dereference in call to VIDIOC_INT_RESET command
Fix a NULL pointer dereference that would occur if the video decoder tied to
the em28xx supports the VIDIOC_INT_RESET call (for example, the cx25840 driver).
Thomas Gleixner [Sat, 20 Dec 2008 20:27:34 +0000 (21:27 +0100)]
Null pointer deref with hrtimer_try_to_cancel()
Impact: Prevent kernel crash with posix timer clockid CLOCK_MONOTONIC_RAW
commit 2d42244ae71d6c7b0884b5664cf2eda30fb2ae68 (clocksource:
introduce CLOCK_MONOTONIC_RAW) introduced a new clockid, which is only
available to read out the raw, non-NTP-adjusted system time.
The above commit did not prevent a posix timer from being created
with that clockid. The timer_create() syscall succeeds and initializes
the timer with a non-existing hrtimer base. When the timer is deleted,
either by timer_delete() or by the exit() cleanup, the kernel crashes.
Prevent the creation of timers for CLOCK_MONOTONIC_RAW by setting the
posix clock function to no_timer_create which returns an error code.
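In sketch form (other fields of the clock definition omitted):
/* Sketch: refuse posix timer creation for this clockid. */
static int no_timer_create(struct k_itimer *new_timer)
{
        return -EOPNOTSUPP;
}

static struct k_clock clock_monotonic_raw = {
        .timer_create = no_timer_create,
        /* ... other callbacks ... */
};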
Reported-and-tested-by: Eric Sesterhenn <snakebyte@gmx.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sat, 20 Dec 2008 19:07:31 +0000 (11:07 -0800)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs:
fs/9p: change simple_strtol to simple_strtoul
9p: convert d_iname references to d_name.name
9p: Remove potentially bad parameter from function entry debug print.
Linus Torvalds [Sat, 20 Dec 2008 19:07:18 +0000 (11:07 -0800)]
Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
x86: fix resume (S2R) broken by Intel microcode module, on A110L
x86 gart: don't complain if no AMD GART found
AMD IOMMU: panic if completion wait loop fails
AMD IOMMU: set cmd buffer pointers to zero manually
x86: re-enable MCE on secondary CPUS after suspend/resume
AMD IOMMU: allocate rlookup_table with __GFP_ZERO
might hang upon resuming, OTOH it should have likely hanged each and every time.
(1) possible deadlock in microcode_resume_cpu() if either 'if' section is
taken;
(2) now, I don't see it in spec. and can't experimentally verify it (newer
ucodes don't seem to be available for my Core2duo)... but logically-wise, I'd
think that when read upon resuming, the 'microcode revision' (MSR 0x8B) should
be back to its original one (we need to reload ucode anyway so it doesn't seem
logical if a cpu doesn't drop the version)... if so, the comparison with
memcmp() for the full 'struct cpu_signature' is wrong... and that's how one of
the aforementioned 'if' sections might have been triggered - leading to a
deadlock.
Obviously, in my tests I simulated loading/resuming with the ucode of the same
version (just to see that the file is loaded/re-loaded upon resuming) so this
issue has never popped up.
I'd appreciate if someone with an appropriate system might give a try to the
2nd patch (titled "fix a comparison && deadlock...").
In any case, the deadlock situation is a must-have fix.
* git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-rc-fixes-2.6:
[SCSI] mpt fusion: clear list of outstanding commands on host reset
[SCSI] scsi_lib: only call scsi_unprep_request() under queue lock
[SCSI] ibmvstgt: move crq_queue_create to the end of initialization
[SCSI] libiscsi REGRESSION: fix passthrough support with older iscsi tools
[SCSI] aacraid: disable Dell Percraid quirk on Adaptec 2200S and 2120S
Linus Torvalds [Fri, 19 Dec 2008 19:36:04 +0000 (11:36 -0800)]
Merge branch 'drm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6
* 'drm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6:
drm/i915: GEM on PAE has problems - disable it for now.
drm/i915: Don't return busy for buffers left on the flushing list.
Stanley Miao [Fri, 19 Dec 2008 14:08:22 +0000 (22:08 +0800)]
ALSA: Fix an Oops bug in the omap soc driver.
There will be an Oops or frequent underrun messages when playing music with
the omap soc driver. This is because a data region is incorrectly sized; another
data region will be overwritten when writing to this data region.
Signed-off-by: Stanley Miao <stanley.miao@windriver.com> Acked-by: Jarkko Nikula <jarkko.nikula@nokia.com> Cc: stable@kernel.org Signed-off-by: Takashi Iwai <tiwai@suse.de>
Takashi Iwai [Fri, 19 Dec 2008 13:02:32 +0000 (14:02 +0100)]
ALSA: hda - Remove non-working headphone control for Dell laptops
The previous commit re-enabled the hp_nid setup for IDT92HD73*, but
it's actually unneeded for Dell laptops that have multiple headphones.
Setting the extra hp_nid results in a non-working "Headphone" mixer
control. Thus hp_nid should be 0 for these Dell models.
Also, the automatic addition of hp_nid should check whether it's
a dual-HP model or not. For dual-HPs, the pins are already checked
by the early workaround.
Wu Fengguang [Wed, 26 Nov 2008 06:35:22 +0000 (14:35 +0800)]
ACPI: don't cond_resched() when irqs_disabled()
The ACPI interpreter usually runs with irqs enabled.
However, during suspend/resume it runs with
irqs disabled to evaluate _GTS/_BFS, as well as
in irqrouter_resume(), which evaluates _CRS, _PRS, and _SRS.
http://bugzilla.kernel.org/show_bug.cgi?id=12252
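The resulting guard amounts to something like this (sketch; the actual macro
in the ACPI OS layer may differ):
/* Sketch: only offer a preemption point when interrupts are enabled. */
if (!irqs_disabled())
        cond_resched();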
Signed-off-by: Wu Fengguang <wfg@linux.intel.com> Signed-off-by: Len Brown <len.brown@intel.com>
Bjorn Helgaas [Thu, 13 Nov 2008 23:30:13 +0000 (17:30 -0600)]
ACPI: fix 2.6.28 acpi.debug_level regression
acpi_early_init() was changed to overwrite the cmdline param,
making it really inconvenient to set debug flags at boot time.
Also, this sets the default level to "info", which is what all the ACPI
drivers use. So to enable messages from drivers, you only have to
supply the "layer" (a.k.a. "component"). For non-"info" ACPI core
and ACPI interpreter messages, you have to supply both level and
layer masks, as before.
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Signed-off-by: Len Brown <len.brown@intel.com>
Takashi Iwai [Wed, 17 Dec 2008 13:51:01 +0000 (14:51 +0100)]
ALSA: hda - Add no-jd model for IDT 92HD73xx
Added a model without jack detection for some desktops that
really have no jack detection. The recent driver caused regressions
regarding the sound output on such machines.
cciss: fix problem that deleting multiple logical drives could cause a panic
Fix problem that deleting multiple logical drives could cause a panic.
It fixes a panic which can be easily reproduced in the following way: Just
create several "arrays," each with multiple logical drives via hpacucli,
then delete the first array, and it will blow up in deregister_disk(), in
the call to get_host() when it tries to dig the hba pointer out of a NULL
queue pointer.
The problem has been present since my code to make rebuild_lun_table
behave better went in.
Signed-off-by: Stephen M. Cameron <scameron@beardog.cca.cpqcorp.net> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Dave Airlie [Fri, 19 Dec 2008 05:38:34 +0000 (15:38 +1000)]
drm/i915: GEM on PAE has problems - disable it for now.
On PAE systems, GEM allocates pages using shmem and passes these
pages to be bound into AGP; however, the AGP interfaces and the x86
set_memory interfaces all take unsigned long, not dma_addr_t.
The initial fix for this was a mess, so we need to do this correctly
for 2.6.29.
Eric Anholt [Mon, 15 Dec 2008 03:05:04 +0000 (19:05 -0800)]
drm/i915: Don't return busy for buffers left on the flushing list.
These buffers don't have active rendering still occurring to them; they just
need either a flush to be emitted or a retire_requests to occur so that we
notice they're done. Return unbusy so that one of the two occurs. The two
expected consumers of this interface (OpenGL and libdrm_intel BO cache) both
want this behavior.
Signed-off-by: Eric Anholt <eric@anholt.net> Acked-by: Keith Packard <keithp@keithp.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
NeilBrown [Fri, 19 Dec 2008 05:25:01 +0000 (16:25 +1100)]
md: Don't read past end of bitmap when reading bitmap.
When we read the write-intent-bitmap off the device, we currently
read a whole number of pages.
When PAGE_SIZE is 4K, this works due to the alignment we enforce
on the superblock and bitmap.
When PAGE_SIZE is 64K, this can read past the end of the device,
which causes an error.
When we write the superblock, we make sure to clip the last page
to just the required size. Copy that code into the read path
to read only the required number of sectors.
Signed-off-by: Neil Brown <neilb@suse.de> Cc: stable@kernel.org
James Chapman [Wed, 17 Dec 2008 12:02:16 +0000 (12:02 +0000)]
ppp: fix segfaults introduced by netdev_priv changes
This patch fixes a segfault in ppp_shutdown_interface() and
ppp_destroy_interface() when a PPP connection is closed. I bisected
the problem to the following commit:
netdevice ppp: Convert directly reference of netdev->priv
1. Use netdev_priv(dev) to replace dev->priv.
2. Alloc netdev's private data by alloc_netdev().
Signed-off-by: Wang Chen <wangchen@cn.fujitsu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The original ppp_generic code treated the netdev and struct ppp as
independent data structures which were freed separately. In moving the
ppp struct into the netdev, it is now possible for the private data to
be freed before the call to ppp_shutdown_interface(), which is bad.
The kfree(ppp) in ppp_destroy_interface() is also wrong; presumably
ppp hasn't worked since the above commit.
The following patch fixes both problems.
Signed-off-by: James Chapman <jchapman@katalix.com> Reviewed-by: Wang Chen <wangchen@cn.fujitsu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Wei Yongjun [Fri, 19 Dec 2008 03:35:10 +0000 (19:35 -0800)]
net: Fix module refcount leak in kernel_accept()
kernel_accept() does not hold a module reference on newsock->ops->owner,
so until now every caller had to call __module_get(newsock->ops->owner)
by hand after kernel_accept().
In sunrpc this reference is not taken, which causes a kernel panic.
The following script was used to reproduce it:
while [ 1 ];
do
mount -t nfs4 192.168.0.19:/ /mnt
touch /mnt/file
umount /mnt
lsmod | grep ipv6
done
This patch fixes the problem by adding __module_get(newsock->ops->owner) to
kernel_accept() itself, so callers no longer need to take the module
reference by hand at every kernel_accept() call site.
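A sketch of the fixed tail of kernel_accept() (error handling trimmed):
/* Sketch of the relevant part of kernel_accept() after the fix. */
err = sock->ops->accept(sock, *newsock, flags);
if (err < 0) {
        sock_release(*newsock);
        *newsock = NULL;
        return err;
}

(*newsock)->ops = sock->ops;
__module_get((*newsock)->ops->owner);   /* hold the protocol module for the new socket */
return 0;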
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
John McCutchan [Thu, 18 Dec 2008 01:43:02 +0000 (17:43 -0800)]
Maintainer email fixes for inotify
Update John McCutchan and Robert Love's email addresses for
maintenance of inotify
Signed-off-by: John McCutchan <john@johnmccutchan.com> Acked-by: Robert Love <rlove@rlove.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>