Jean Delvare [Wed, 20 Mar 2013 23:35:19 +0000 (10:35 +1100)]
hwmon: (lm75) Tune resolution and sample time per chip
Most LM75-compatible chips can sample either much faster or at a much
better resolution than the original LM75 chip. So far the lm75 driver
did not let the user take advantage of these improvements. Do it now.
I decided to almost always configure the chip to use the best
resolution possible, which also means the longest sample time. The
only chips for which I didn't are the DS75, DS1775 and STDS75, because
they are really too slow in 12-bit mode (1.2 to 1.5 seconds worst
case), so I went for 11-bit mode as a more reasonable tradeoff. This
choice is dictated by the fact that the hwmon subsystem is meant for
system monitoring: it was never supposed to be ultra-fast, and as a
matter of fact we do cache the sampled values in almost all drivers.
If anyone isn't pleased with these default settings, they can always
introduce a platform data structure or DT support for the lm75. That
being said, it seems nobody ever complained that the driver wouldn't
refresh the value faster than every 1.5 seconds, and the change made
it faster for all chips even in 12-bit mode, so I don't expect any
complaints.
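For illustration, a condensed sketch of what this per-chip tuning can
look like at probe time. It follows the style of drivers/hwmon/lm75.c,
but the chip cases and the resolution/sample-time values shown here are
abbreviated and illustrative, not the full table from the patch:

    /* Illustrative sketch -- chip cases and values abbreviated */
    static void lm75_tune(struct lm75_data *data, enum lm75_type kind)
    {
            switch (kind) {
            case ds75:
            case ds1775:
            case stds75:
                    /* 12-bit mode takes 1.2-1.5 s on these chips, so
                     * settle for 11 bits as the tradeoff noted above */
                    data->resolution = 11;
                    data->sample_time = HZ;
                    break;
            case lm75:
            case lm75a:
                    /* the original chips only do 9 bits */
                    data->resolution = 9;
                    data->sample_time = HZ / 2;
                    break;
            default:
                    /* most clones do 12 bits at an acceptable speed */
                    data->resolution = 12;
                    data->sample_time = HZ / 4;
                    break;
            }
    }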
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Acked-by: Guenter Roeck <linux@roeck-us.net>
Jean Delvare [Wed, 20 Mar 2013 23:35:18 +0000 (10:35 +1100)]
hwmon: (lm75) Prepare to support per-chip resolution and sample time
Prepare the lm75 driver to support per-chip resolution and sample
time. For now we only make the code generic enough to support it, but
we still use the same, unchanged resolution (9-bit) and sample time
(1.5 s) for all chips.
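A minimal sketch of the shape this takes (field names modeled on
lm75.c, illustrative rather than the literal patch): the per-client
data grows resolution and sample-time fields, and the cache-expiry
test consults them instead of hard-coded constants.

    /* Illustrative sketch of the per-client state */
    struct lm75_data {
            struct i2c_client       *client;
            unsigned long           last_updated;   /* in jiffies */
            u8                      resolution;     /* 9 bits for every chip, for now */
            unsigned int            sample_time;    /* HZ + HZ / 2 (1.5 s), for now */
            u16                     temp;           /* cached temperature register */
    };

    static bool lm75_cache_expired(const struct lm75_data *data)
    {
            return time_after(jiffies, data->last_updated + data->sample_time);
    }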
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Acked-by: Guenter Roeck <linux@roeck-us.net>
There is no standard for the configuration register bits of LM75-like
chips. We shouldn't blindly clear the bits setting the resolution, as
they are either unused or used for something else on some of the
supported chips.
So, switch to per-chip configuration initialization. This will allow
for better tuning later, for example using more resolution bits when
available.
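Sketched below is the shape such per-chip initialization can take (the
bit assignments are illustrative; each chip's datasheet defines its
own): every chip contributes only the configuration bits it actually
defines, on top of a small common set.

    /* Illustrative sketch -- bit positions made up for clarity */
    static void lm75_init_config(struct i2c_client *client, enum lm75_type kind)
    {
            u8 set_mask = 0;        /* bits to set in the config register */
            u8 clr_mask = 1 << 0;   /* common to all chips: clear SHUTDOWN */
            int old, new;

            switch (kind) {
            case adt75:
                    clr_mask |= 1 << 5;     /* defined on this chip... */
                    break;
            default:
                    break;                  /* ...unused or different elsewhere */
            }

            old = i2c_smbus_read_byte_data(client, LM75_REG_CONF);
            new = (old & ~clr_mask) | set_mask;
            if (old >= 0 && new != old)
                    i2c_smbus_write_byte_data(client, LM75_REG_CONF, new);
    }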
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Acked-by: Guenter Roeck <linux@roeck-us.net>
Alan Stern [Wed, 20 Mar 2013 19:07:26 +0000 (15:07 -0400)]
USB: EHCI: fix regression in QH unlinking
This patch (as1670) fixes a regression caused by commit
6402c796d3b4205d3d7296157956c5100a05d7d6 ("USB: EHCI: work around
silicon bug in Intel's EHCI controllers"). The workaround goes through
two IAA cycles for each QH being unlinked. During the first cycle,
the QH is not added to the async_iaa list (because it isn't fully gone
from the hardware yet), which means that list will be empty.
Unfortunately, I forgot to update the IAA watchdog timer routine. It
thinks that an empty async_iaa list means the timer expiration was an
error, which isn't true any more. This problem didn't show up during
initial testing because the controllers being tested all had working
IAA interrupts. But not all controllers do, and when the watchdog
timer expires, the empty-list check prevents the second IAA cycle from
starting. As a result, URB unlinks never complete. The check needs
to be removed.
Among the symptoms of the regression are processes stuck in D wait
states and hangs during system shutdown.
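The shape of the fix, heavily condensed (the real watchdog routine in
ehci-timer.c does more bookkeeping than shown; this sketch only
illustrates the check being removed and why):

    static void ehci_iaa_watchdog(struct ehci_hcd *ehci)
    {
            /*
             * Removed:
             *
             *      if (list_empty(&ehci->async_iaa))
             *              return;
             *
             * During the first of the two IAA cycles the list is
             * legitimately empty, so bailing out here meant the second
             * cycle never started and the unlink never completed.
             */
            if (ehci->rh_state != EHCI_RH_RUNNING)
                    return;

            end_unlink_async(ehci);         /* move on to the next cycle */
    }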
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-and-tested-by: Stephen Warren <swarren@wwwdotorg.org>
Reported-and-tested-by: Sven Joachim <svenjoac@gmx.de>
Reported-by: Andreas Bombe <aeb@debian.org>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ben Widawsky [Wed, 20 Mar 2013 03:19:56 +0000 (20:19 -0700)]
drm/i915: Correct sandybridge overclocking
Change the gen6+ max delay if the pcode read was successful (not the
inverse).
The previous code was all sorts of wrong and has existed since I broke
it:
    commit 42c0526c930523425ff6edc95b7235ce7ab9308d
    Author: Ben Widawsky <ben@bwidawsk.net>
    Date:   Wed Sep 26 10:34:00 2012 -0700

        drm/i915: Extract PCU communication
I added some parentheses for clarity, and I also corrected the debug
message to use the mask (it was wrong before I came along) and added a
print to show the value we're changing from.
Looking over the code, I'm not actually sure what we're trying to do. I
introduced the bug simply by extracting the function, not by
implementing anything new. We already set max_delay based on the
capabilities
register (which is what we use elsewhere to determine min and max).
This would potentially increase it, I suppose? Jesse, I can't find the
document which explains the definitions of the pcode commands; maybe you
have it around.
Based on Jesse's response, this could potentially be for -fixes, or
stable, or maybe lead to us dropping it entirely. As the current code
is, things won't completely break because of the aforementioned
capabilities register, and in my experimentation enabling this has no
effect: it goes from 1100->1100.
I found this while reviewing Jesse's VLV patches.
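For reference, a condensed version of the corrected logic (based on
gen6_enable_rps() in intel_pm.c; exact field names vary between kernel
versions):

    ret = sandybridge_pcode_read(dev_priv, GEN6_READ_OC_PARAMS, &pcu_mbox);
    if (!ret && (pcu_mbox & (1 << 31))) {   /* read OK *and* OC supported */
            DRM_DEBUG_DRIVER("overclocking supported, adjusting max from "
                             "%dMHz to %dMHz\n",
                             (dev_priv->rps.max_delay & 0xff) * 50,
                             (pcu_mbox & 0xff) * 50);
            dev_priv->rps.max_delay = pcu_mbox & 0xff;
    }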
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
[danvet: Bikeshed-away the redundant parens spotted by Chris Wilson.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Paolo Bonzini [Tue, 19 Mar 2013 15:30:26 +0000 (16:30 +0100)]
KVM: x86: correctly initialize the CS base on reset
The CS base was initialized to 0 on VMX (wrong, but usually overridden
by userspace before starting) or 0xf0000 on SVM. The correct value is
0xffff0000, and VMX is able to emulate it now, so use it.
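Condensed from the VMX side of the change (vmx_vcpu_reset(); shown for
illustration only), the reset state now decodes to the architectural
reset vector:

    seg_setup(VCPU_SREG_CS);
    vmcs_write16(GUEST_CS_SELECTOR, 0xf000);
    vmcs_write32(GUEST_CS_BASE, 0xffff0000);        /* was 0; SVM used 0xf0000 */

    /*
     * With RIP = 0xfff0 after reset, the first fetch is at
     * base + RIP = 0xffff0000 + 0xfff0 = 0xfffffff0, the x86 reset vector.
     */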