Chris Wilson [Wed, 13 Jul 2016 17:34:44 +0000 (18:34 +0100)]
drm/i915/fbdev: Drain the suspend worker on retiring
Since the suspend_work can arm itself if the console_lock() is currently
held elsewhere, simply calling flush_work() doesn't guarantee that the
work is idle upon return. To do so requires using cancel_work_sync().
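A minimal sketch of the distinction, assuming a self-rearming worker along the lines of the fbdev suspend work (names here are illustrative):
#include <linux/console.h>
#include <linux/workqueue.h>

static void suspend_worker(struct work_struct *work);
static DECLARE_WORK(suspend_work, suspend_worker);

/* Self-rearming worker: if console_lock() is contended it requeues itself,
 * so a concurrent flush_work() can return while a fresh instance is still
 * pending. */
static void suspend_worker(struct work_struct *work)
{
        if (!console_trylock()) {
                schedule_work(work);            /* re-arm and retry later */
                return;
        }
        /* ... suspend the fbdev console under console_lock ... */
        console_unlock();
}

static void example_fini(void)
{
        /* cancel_work_sync() removes any re-armed pending instance and waits
         * for a running one to finish, so the work really is idle on return;
         * flush_work() only waits for the instance queued at call time. */
        cancel_work_sync(&suspend_work);
}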
drm/i915: add missing condition for committing planes on crtc
The i915 driver checks for color management property changes as part
of a plane update. Therefore a color management update must imply a
plane update, otherwise we never update the transformation matrices
and degamma/gamma LUTs.
v2: add comment about moving the commit of color management registers
to an async worker
v3: Commit color management register right after vblank
v4: Move back color management commit condition together with planes
commit
v5: Trigger color management commit through the planes commit (Daniel)
v6: Make plane change update more readable
Fixes: 20a34e78f0d7 (drm/i915: Update color management during vblank evasion.) Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: drm-intel-fixes@lists.freedesktop.org Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
References: https://lkml.org/lkml/2016/7/14/614 Reviewed-and-tested-by: Mario Kleiner <mario.kleiner.de@gmail.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1464183041-8478-1-git-send-email-lionel.g.landwerlin@intel.com
Lyude [Tue, 21 Jun 2016 21:03:44 +0000 (17:03 -0400)]
drm/i915: Enable polling when we don't have hpd
Unfortunately, there are two situations where we lose hpd right now:
- Runtime suspend
- When we've shut off all of the power wells on Valleyview/Cherryview
While it would be nice if this didn't cause issues, this has the
ability to get us in some awkward states where a user won't be able to
get their display to turn on. For instance, if we boot a Valleyview
system without any monitors connected, it won't need any of its power
wells and will thus shut them off. Since this causes us to lose HPD, this
means that unless the user knows how to ssh into their machine and do a
manual reprobe for monitors, none of the monitors they connect after
booting will actually work.
Eventually we should come up with a better fix than having to enable
polling for this, since this makes rpm a lot less useful, but for now
the infrastructure in i915 just isn't there yet to get hpd in these
situations.
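A rough sketch of what the deferred init work boils down to; apart from the drm helpers, the structure and field names below are illustrative of the approach rather than the exact patch:
static void i915_hpd_poll_init_work(struct work_struct *work)
{
        struct drm_i915_private *dev_priv =
                container_of(work, struct drm_i915_private,
                             hotplug.poll_init_work);
        struct drm_device *dev = &dev_priv->drm;
        struct drm_connector *connector;

        mutex_lock(&dev->mode_config.mutex);

        /* Without working HPD, fall back to polling every connector. */
        list_for_each_entry(connector, &dev->mode_config.connector_list, head)
                connector->polled = DRM_CONNECTOR_POLL_CONNECT |
                                    DRM_CONNECTOR_POLL_DISCONNECT;

        drm_kms_helper_poll_enable(dev);

        mutex_unlock(&dev->mode_config.mutex);

        /* (The real worker also handles the opposite transition, dropping
         * back to HPD and reprobing once via drm_helper_hpd_irq_event().) */
}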
Changes since v1:
- Add comment explaining the addition of the if
(!mode_config->poll_running) in intel_hpd_init()
- Remove unneeded if (!dev->mode_config.poll_enabled) in
i915_hpd_poll_init_work()
- Call to drm_helper_hpd_irq_event() after we disable polling
- Add cancel_work_sync() call to intel_hpd_cancel_work()
Changes since v2:
- Apparently dev->mode_config.poll_running doesn't actually reflect
whether or not a poll is currently in progress, and is actually used
for dynamic module parameter enabling/disabling. So now we instead
keep track of our own poll_running variable in dev_priv->hotplug
- Clean i915_hpd_poll_init_work() a little bit
Changes since v3:
- Remove the now-redundant connector loop in intel_hpd_init(), just
rely on intel_hpd_poll_enable() for setting connector->polled
correctly on each connector
- Get rid of poll_running
- Don't assign enabled in i915_hpd_poll_init_work before we actually
lock dev->mode_config.mutex
- Wrap enabled assignment in i915_hpd_poll_init_work() in READ_ONCE()
for doc purposes
- Do the same for dev_priv->hotplug.poll_enabled with WRITE_ONCE in
intel_hpd_poll_enable()
- Add some comments about racing not mattering in intel_hpd_poll_enable
Changes since v4:
- Rename intel_hpd_poll_enable() to intel_hpd_poll_init()
- Drop the bool argument from intel_hpd_poll_init()
- Remove redundant calls to intel_hpd_poll_init()
- Rename poll_enable_work to poll_init_work
- Add some kerneldoc for intel_hpd_poll_init()
- Cross-reference intel_hpd_poll_init() in intel_hpd_init()
- Just copy the loop from intel_hpd_init() in intel_hpd_poll_init()
Changes since v5:
- Minor kerneldoc nitpicks
Cc: stable@vger.kernel.org Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Lyude <cpaul@redhat.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Lyude [Tue, 21 Jun 2016 21:03:43 +0000 (17:03 -0400)]
drm/i915/vlv: Disable HPD in valleyview_crt_detect_hotplug()
One of the things preventing us from using polling is the fact that
calling valleyview_crt_detect_hotplug() when there's a VGA cable
connected results in sending another hotplug. With polling enabled when
HPD is disabled, this results in a scenario like this:
- We enable power wells and reset the ADPA
- output_poll_exec does force probe on VGA, triggering a hpd
- HPD handler waits for poll to unlock dev->mode_config.mutex
- output_poll_exec shuts off the ADPA, unlocks dev->mode_config.mutex
- HPD handler runs, resets ADPA and brings us back to the start
This results in an endless irq storm getting sent from the ADPA
whenever a VGA connector gets detected in the middle of polling.
Somewhat based off of the "drm/i915: Disable CRT HPD around force
trigger" patch Ville Syrjälä sent a while back
Cc: stable@vger.kernel.org Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Lyude <cpaul@redhat.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Lyude [Tue, 21 Jun 2016 21:03:42 +0000 (17:03 -0400)]
drm/i915/vlv: Reset the ADPA in vlv_display_power_well_init()
While VGA hotplugging worked(ish) before, it looks like that was mainly
because we'd unintentionally enable it in
valleyview_crt_detect_hotplug() when we did a force trigger. This
doesn't work reliably enough because whenever the display powerwell on
vlv gets disabled, the values set in VLV_ADPA get cleared and
consequently VGA hotplugging gets disabled. This causes bugs such as one
we found on an Intel NUC, where doing the following sequence of
hotplugs:
Would result in VGA hotplugging becoming disabled, due to the powerwells
getting toggled in the process of connecting HDMI.
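Conceptually, the fix (see the v3 note below) is to reprogram the CRT hotplug configuration whenever the display power well comes back up; a sketch of the shape of it:
/* Sketch: the VLV_ADPA hotplug setup is lost when the display power well
 * is turned off, so reapply it via the CRT encoder on power-up. */
static void vlv_display_power_well_init(struct drm_i915_private *dev_priv)
{
        struct intel_encoder *encoder;

        /* ... existing power-well bring-up (pipes, HPD re-init, ...) ... */

        for_each_intel_encoder(&dev_priv->drm, encoder) {
                if (encoder->type == INTEL_OUTPUT_ANALOG)
                        intel_crt_reset(&encoder->base);
        }
}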
Changes since v3:
- Expose intel_crt_reset() through intel_drv.h and call that in
vlv_display_power_well_init() instead of
encoder->base.funcs->reset(&encoder->base);
Changes since v2:
- Use intel_encoder structs instead of drm_encoder structs
Changes since v1:
- Instead of handling the register writes ourself, we just reuse
intel_crt_detect()
- Instead of resetting the ADPA during display IRQ installation, we now
reset them in vlv_display_power_well_init()
Cc: stable@vger.kernel.org Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Lyude <cpaul@redhat.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: Rebase over dev_priv/drm_device embedding.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Chris Wilson [Wed, 13 Jul 2016 08:10:37 +0000 (09:10 +0100)]
drm/i915: Defer enabling rc6 til after we submit the first batch/context
Some hardware requires a valid render context before it can initiate
rc6 power gating of the GPU; the default state of the GPU is not
sufficient and may lead to undefined behaviour. The first execution of
any batch will load the "golden render state", at which point it is safe
to enable rc6. As we do not forcibly load the kernel context at resume,
we have to hook into the batch submission to be sure that the render
state is setup before enabling rc6.
However, since we don't enable powersaving until that first batch, we
queue a delayed task in order to guarantee that the batch is indeed
submitted.
v2: Rearrange intel_disable_gt_powersave() to match.
v3: Apply user specified cur_freq (or idle_freq if not set).
v4: Give in, and supply a delayed work to autoenable rc6
v5: Mika suggested a couple of better names for delayed_resume_work
v6: Rebalance rpm_put around the autoenable task
Chris Wilson [Wed, 13 Jul 2016 08:10:35 +0000 (09:10 +0100)]
drm/i915: Define a separate variable and control for RPS waitboost frequency
To allow the user finer control over waitboosting, allow them to set the
frequency we request for the boost. This also allows them to effectively
disable the boosting by setting the boost request to a low frequency.
Chris Wilson [Wed, 13 Jul 2016 08:10:32 +0000 (09:10 +0100)]
drm/i915: Preserve current RPS frequency across init
Select idle frequency during initialisation, then reset the last known
frequency when re-enabling. This allows us to preserve the user selected
frequency across resets.
v2: Stop CHV from overriding the user's choice in cherryview_enable_rps()
Chris Wilson [Wed, 13 Jul 2016 08:10:31 +0000 (09:10 +0100)]
drm/i915: Flush GT idle status upon reset
Upon resetting the GPU, we force the engines to be idle by clearing
their request lists. However, I neglected to clear the GT active status
and so the next request following the reset was not marking the device
as busy again. (We had to wait until any outstanding retire worker
finally ran and cleared the active status.)
Ville Syrjälä [Tue, 12 Jul 2016 12:00:37 +0000 (15:00 +0300)]
drm/i915: Ignore panel type from OpRegion on SKL
Dell XPS 13 9350 apparently doesn't like it when we use the panel type
from OpRegion. The OpRegion panel type (0) tells us to use low
vswing for eDP, whereas the VBT panel type (2) tells us to use normal
vswing. The problem is that low vswing results in some display flickers.
Since no one seems to know how this stuff is supposed to be handled,
let's just ignore the OpRegion panel type on SKL for now.
v2: Print the panel type correctly in the debug output
Reported-by: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: drm-intel-fixes@lists.freedesktop.org
References: https://lists.freedesktop.org/archives/intel-gfx/2016-June/098826.html Fixes: a05628195a0d ("drm/i915: Get panel_type from OpRegion panel details") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1468324837-29237-1-git-send-email-ville.syrjala@linux.intel.com Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Tested-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
With the unified common engine setup done, and the execlist engine
initialization loop clearly split into two phases, we can eliminate
the separate legacy engine initialization code.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Move the execlist engine setup to vfuncs so that the engine
init loop is clearly split into the mode agnostic and
specific steps.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Dave Gordon [Wed, 13 Jul 2016 15:03:35 +0000 (16:03 +0100)]
drm/i915: unify first-stage engine struct setup
intel_lrc.c has a table of "logical rings" (meaning engines), while
intel_ringbuffer.c has separately open-coded initialisation for each
engine. We can deduplicate this somewhat by using the same first-stage
engine-setup function for both modes.
So here we expose the function that transfers information from the
static table of (all) known engines to the dev_priv->engine array of
engines available on this device (adjusting the names along the way)
and then embed calls to it in both the LRC and the legacy-mode setup.
Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Ville Syrjälä [Tue, 12 Jul 2016 16:24:47 +0000 (19:24 +0300)]
drm/i915: Unbreak interrupts on pre-gen6
Prior to gen6 we didn't have per-ring IMR registers, which means that
since commit 61ff75ac20ff ("drm/i915: Simplify enabling
user-interrupts with L3-remapping") we're now masking off all interrupts
when init_render_ring() gets called. That's rather rude. Let's limit
the ring IMR frobbing to machines that actually have the per-ring IMR
registers.
Chris Wilson [Tue, 12 Jul 2016 11:55:29 +0000 (12:55 +0100)]
drm/i915: Provide argument names for static stubs
Make sure we keep kbuilder happy in all of its random configs by
providing argument names for compile-time stubs.
In file included from drivers/gpu/drm/i915/intel_dp_mst.c:27:0:
drivers/gpu/drm/i915/i915_drv.h: In function 'i915_debugfs_register':
>> drivers/gpu/drm/i915/i915_drv.h:3612:48: error: parameter name omitted
static inline int i915_debugfs_register(struct drm_i915_private *) {return 0;}
^~~~~~~~~~~~~~~~
drivers/gpu/drm/i915/i915_drv.h: In function 'i915_debugfs_unregister':
drivers/gpu/drm/i915/i915_drv.h:3613:51: error: parameter name omitted
static inline void i915_debugfs_unregister(struct drm_i915_private *) {}
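The fix is simply to give the stub parameters names, e.g.:
static inline int i915_debugfs_register(struct drm_i915_private *dev_priv) {return 0;}
static inline void i915_debugfs_unregister(struct drm_i915_private *dev_priv) {}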
Chris Wilson [Mon, 11 Jul 2016 13:46:17 +0000 (14:46 +0100)]
drm/i915: Update ifdeffery for mutex->owner
In commit 7608a43d8f2e ("locking/mutexes: Use MUTEX_SPIN_ON_OWNER when
appropriate") the owner field in the mutex was updated from being
dependent upon CONFIG_SMP to using optimistic spin. Update our peek
function to suit.
Now that the last couple of hacks have been removed from the runtime
power management users, we can fully enable the asserts by preventing the
temptation to disable them when our code is buggy.
Chris Wilson [Sat, 9 Jul 2016 09:12:06 +0000 (10:12 +0100)]
drm/i915: Kick hangcheck from retire worker
Let's ensure that we cannot run indefinitely without the hangcheck
worker being queued. We removed it from being kicked on every request
because we were kicking it a few million times in every hangcheck
interval and only once is necessary! However, that leaves us with the
issue of what happens if userspace never waits for a request, runs out of
resources, or simply issues a request and then spins on BUSY_IOCTL?
Chris Wilson [Sat, 9 Jul 2016 09:12:05 +0000 (10:12 +0100)]
drm/i915/breadcrumbs: Queue hangcheck before sleeping
Never go to sleep waiting on the GPU without first ensuring that we will
get woken up.
We have a choice of queuing the hangcheck before every schedule() or the
first time we wakeup. In order to simply accommodate both the signaler
and the ordinary waiter, move the queuing to the common point of
enabling the irq. We lose the paranoid safety of ensuring that the
hangcheck is active before the sleep, but avoid code duplication (and
redundant hangcheck queuing).
Chris Wilson [Fri, 24 Jun 2016 13:07:14 +0000 (14:07 +0100)]
drm/i915: Fill unused GGTT with scratch pages for VT-d
One of the numerous VT-d workarounds we require is for the display
hardware reading past the end of the buffer and triggering VT-d faults. This
is acknowledged in the code as being safe "since we fill the unused
portions of the GGTT with the scratch page". Alas, that is no longer
always true and so we trigger DMAR read faults.
Skylake also requires another workaround to avoid mixing VT-d and
unpopulated PTE, and so there we also need to ensure we fill unused
entries with the scratch page.
Reported-by: Mike Lothian <mike@fireburn.co.uk>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96584 Fixes: f7770bfd9fd2 ("drm/i915: Skip clearing the GGTT on full-ppgtt systems") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: David Weinehall <david.weinehall@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1466773634-8106-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: David Weinehall <david.weinehall@intel.com>
drm/i915: Introduce Kabypoint PCH for Kabylake H/DT.
Some Kabylake SKUs are going to use Kabypoint PCH.
It is mainly for Halo and DT ones.
From our specs it doesn't seem that KBP brings
any change on the display south engine. So let's consider
this as a continuation of SunrisePoint, i.e., SPT+.
Since it is easy to get confused by a letter change:
KBL = Kabylake - CPU/GPU codename.
KBP = Kabypoint - PCH codename.
Ville Syrjälä [Wed, 22 Jun 2016 18:57:09 +0000 (21:57 +0300)]
drm/i915: Check for invalid cloning earlier during modeset
Move the encoder cloning check to happen earlier in the modeset. The
main benefit will be that the debug output from a failed modeset will
be less confusing as output_types can not indicate an invalid
configuration during the later computation stages.
For instance, what happened to me was kms_setmode was attempting one
of its invalid cloning checks during which it asked for DP+VGA cloning
on HSW. In this case the DP .compute_config() was executed after
the FDI .compute_config() leaving the DP link clock (1.62 in this case)
in port_clock, and then later the FDI BW computation tried to use that
as the FDI link clock (which should always be 2.7). 1.62 x 2 wasn't
enough for the mode it was trying to use, and so it ended up rejecting
the modeset, not because of an invalid cloning configuration, but
because of supposedly running out of FDI bandwidth. Took me a while
to figure out what had actually happened.
Ville Syrjälä [Wed, 22 Jun 2016 18:57:07 +0000 (21:57 +0300)]
drm/i915: Kill has_dsi_encoder
has_dsi_encoder was introduced to indicate that the pipe is driving
a DSI encoder. Now that we have the output_types bitmask that can
tell us the same thing, let's just kill has_dsi_encoder.
INTEL_OUTPUT_DISPLAYPORT has been bugging me for a long time. It always
looks out of place besides INTEL_OUTPUT_EDP and INTEL_OUTPUT_DP_MST.
Let's just rename it to INTEL_OUTPUT_DP.
Ville Syrjälä [Wed, 22 Jun 2016 18:57:05 +0000 (21:57 +0300)]
drm/i915: Replace some open coded intel_crtc_has_dp_encoder()s
A bunch of places still look for DP encoders manually. Just call
intel_crtc_has_dp_encoder(). Note that many of these places don't
look for EDP or DP_MST, but it's still fine to replace them because
* for audio we don't enable audio on eDP anyway
* the code that lacks the DP MST check is only for platforms that
don't support MST anyway
Ville Syrjälä [Wed, 22 Jun 2016 18:57:04 +0000 (21:57 +0300)]
drm/i915: Kill has_dp_encoder from pipe_config
Use the new output_types bitmask instead of has_dp_encoder.
To make it less painful provide a small helper
(intel_crtc_has_dp_encoder()) to do the bitsy stuff.
Ville Syrjälä [Wed, 22 Jun 2016 18:57:03 +0000 (21:57 +0300)]
drm/i915: Replace manual lvds and sdvo/hdmi counting with intel_crtc_has_type()
Since we now have the output_types bitmask in the crtc state, there's no
need to iterate through all the encoders to see if an LVDS or SDVO/HDMI
encoder might be present.
Ville Syrjälä [Wed, 22 Jun 2016 18:57:02 +0000 (21:57 +0300)]
drm/i915: Unify intel_pipe_has_type() and intel_pipe_will_have_type()
With the introduction of the output_types mask, intel_pipe_has_type()
and intel_pipe_will_have_type() are basically the same thing. Replace
them with a new intel_crtc_has_type() (identical to
intel_pipe_will_have_type() actually).
v2: Rebase
v3: Make intel_crtc_has_type() static inline (Chris)
Ville Syrjälä [Wed, 22 Jun 2016 18:57:01 +0000 (21:57 +0300)]
drm/i915: Add output_types bitmask into the crtc state
Rather than looping through encoders to see which encoder types
are being driven by the pipe, add an output_types bitmask into
the crtc state and populate it prior to compute_config and during
state readout.
v2: Determine output_types before .compute_config() hooks are called
Ville Syrjälä [Wed, 22 Jun 2016 18:56:59 +0000 (21:56 +0300)]
drm/i915: Don't mark eDP encoders as MST capable
If we've determined that the encoder is eDP, we shouldn't try to use MST
on it. Or at least the code doesn't seem to expect that since there are
some type==DP checks in the MST code.
Dave Gordon [Wed, 6 Jul 2016 14:30:11 +0000 (15:30 +0100)]
drm/i915: avoid wait_for_atomic() in non-atomic host2guc_action()
Rather than using wait_for_atomic() when checking for a response from
the GuC, we can get the effect of a hybrid spin/sleep wait by breaking
it into two stages. First, spin-wait for up to 10us to minimise latency
for "quick" commands; then, if that times out, sleep-wait for up to 10ms
(the maximum allowed for a "slow" command).
Being able to do this depends on the recent patch 18f4b84 ("drm/i915: Use atomic waits for short non-atomic ones")
and is similar to the hybrid approach in 1758b90 ("drm/i915: Use a hybrid scheme for fast register waits")
(although we can't use that as-is, because that interface doesn't quite
match what we need here).
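A minimal sketch of such a two-stage wait using only generic kernel primitives; the helper below is illustrative, and the driver's actual wait_for()-family macros differ in detail:
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/ktime.h>

/* Spin for up to 10us, then sleep-wait for up to 10ms, polling @done(). */
static int hybrid_wait(bool (*done)(void *ctx), void *ctx)
{
        ktime_t spin_end = ktime_add_us(ktime_get(), 10);
        unsigned long timeout;

        /* Stage 1: busy-spin to catch "quick" responses with minimal latency. */
        do {
                if (done(ctx))
                        return 0;
                cpu_relax();
        } while (ktime_before(ktime_get(), spin_end));

        /* Stage 2: sleeping wait for "slow" responses, 10ms budget in total. */
        timeout = jiffies + msecs_to_jiffies(10);
        do {
                if (done(ctx))
                        return 0;
                usleep_range(100, 200);
        } while (time_before(jiffies, timeout));

        return done(ctx) ? 0 : -ETIMEDOUT;
}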
Chris Wilson [Wed, 6 Jul 2016 11:39:02 +0000 (12:39 +0100)]
drm/i915: Group the irq breadcrumb variables into the same cacheline
As we inspect both the tasklet (to check for an active bottom-half) and
set the irq-posted flag at the same time (both in the interrupt handler
and then in the bottom-half), group those two together into the same
cacheline. (Not having total control over placement of the struct means
we can't guarantee the cacheline boundary, we need to align the kmalloc
and then each struct, but the grouping should help.)
v2: Try a couple of different names for the state touched by the user
interrupt handler.
Chris Wilson [Wed, 6 Jul 2016 11:39:01 +0000 (12:39 +0100)]
drm/i915: Wake up the bottom-half if we steal their interrupt
Following on from the scenario Tvrtko envisioned to explain a hard-to-hit
race with multiple first waiters, we could also then race in the
__i915_request_irq_complete() and the bottom-half may miss the vital
irq-seqno barrier and so go to sleep not noticing their seqno is
complete.
Chris Wilson [Wed, 6 Jul 2016 11:39:00 +0000 (12:39 +0100)]
drm/i915: Always double check for a missed interrupt for new bottom halves
After assigning ourselves as the new bottom-half, we must perform a
cursory check to prevent a missed interrupt. Either we miss the interrupt
whilst programming the hardware, or if there was a previous waiter (for
a later seqno) they may be woken instead of us (due to the inherent race
in the unlocked read of b->tasklet in the irq handler) and so we miss the
wake up.
Spotted-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96806 Fixes: 688e6c725816 ("drm/i915: Slaughter the thundering... herd") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1467805142-22219-1-git-send-email-chris@chris-wilson.co.uk
Chris Wilson [Tue, 5 Jul 2016 09:40:23 +0000 (10:40 +0100)]
drm/i915: Convert dev_priv->dev backpointers to dev_priv->drm
Since drm_i915_private is now a subclass of drm_device we do not need to
chase the drm_i915_private->dev backpointer and can instead simply
access drm_i915_private->drm directly.
   text    data     bss     dec     hex filename
1068757    4565     416 1073738  10624a drivers/gpu/drm/i915/i915.ko
1066949    4565     416 1071930  105b3a drivers/gpu/drm/i915/i915.ko
Created by the coccinelle script:
@@
struct drm_i915_private *d;
identifier i;
@@
(
- d->dev->i
+ d->drm.i
|
- d->dev
+ &d->drm
)
and for good measure the dev_priv->dev backpointer was removed entirely.
Chris Wilson [Tue, 5 Jul 2016 09:40:21 +0000 (10:40 +0100)]
drm/i915: Remove use of dev_priv->dev backpointer in __i915_printk()
As we can just directly use drm_dev->drm.dev, we do not need the
drm_dev->dev backpointer anymore and can also lose the warning about
order of __i915_printk() and our initialisation (which is now always
safe).
Chris Wilson [Mon, 4 Jul 2016 07:48:33 +0000 (08:48 +0100)]
drm/i915: Skip capturing an error state if we already have one
As we only ever keep the first error state around, we can avoid some
work that can be quite intrusive if we don't record the error the second
time around. This does move the race whereby the user could discard one
error state as the second is being captured, but that race exists in the
current code and we hope that recapturing error state is only done for
debugging.
Note that as we discard the error state for simulated errors, the igt tests
that exercise error capture continue to function.
Chris Wilson [Tue, 5 Jul 2016 07:54:36 +0000 (08:54 +0100)]
drm/i915: Replace lockless_dereference(bool) with READ_ONCE()
After Joonas complained about using READ_ONCE() on the only use of the
variable in the function, where the intent was to simply document that
the read was intentionally racy and unlocked, I switched the READ_ONCE()
over to lockless_dereference(). However, in linux-next that has a
stronger type-check to only allow pointers and is no longer
interchangeable with READ_ONCE(), see commit 331b6d8c7afc
("locking/barriers: Validate lockless_dereference() is used on a pointer
type")
drm/i915: Explicitly convert some macros to boolean values
Some IS_ and HAS_ macros can return any non-zero value for true.
One potential problem with that is that someone could assign
them to integers and be surprised with the result. Therefore it
is probably safer to do the conversion to 0/1 in the macros
themselves.
Luckily this does not seem to have an effect on code size.
Only one call site was getting bit by this and a patch for
that has been sent as "drm/i915/guc: Protect against HAS_GUC_*
returning true values other than one".
v2: Added some extra braces as suggested by checkpatch.
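Illustratively (the feature macro below is made up, not one of the macros actually touched), the change is just a double negation:
/* Before: the macro yields whatever non-zero bit pattern is set, which
 * truncates badly if assigned to an int when the bit is above bit 31:
 *      #define HAS_FEATURE(dev) (INTEL_INFO(dev)->feature_mask & FEATURE_BIT)
 *
 * After: force the result to 0/1 so "int x = HAS_FEATURE(dev);" is safe. */
#define HAS_FEATURE(dev) (!!(INTEL_INFO(dev)->feature_mask & FEATURE_BIT))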
Chris Wilson [Mon, 4 Jul 2016 10:34:36 +0000 (11:34 +0100)]
drm/i915: Mass convert dev->dev_private to to_i915(dev)
Since we now subclass struct drm_device, we can save pointer dances by
noting the equivalence of struct drm_device and struct drm_i915_private,
i.e. by using to_i915().
   text    data     bss     dec     hex filename
1073824    4562     416 1078802  107612 drivers/gpu/drm/i915/i915.ko
1068976    4562     416 1073954  106322 drivers/gpu/drm/i915/i915.ko
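The saved pointer dance rests on the embedding; roughly:
/* drm_i915_private embeds its drm_device, so the conversion is plain
 * pointer arithmetic rather than a dereference of dev->dev_private: */
struct drm_i915_private {
        struct drm_device drm;
        /* ... */
};

static inline struct drm_i915_private *to_i915(struct drm_device *dev)
{
        return container_of(dev, struct drm_i915_private, drm);
}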
Chris Wilson [Fri, 17 Jun 2016 13:35:05 +0000 (14:35 +0100)]
drm/i915: Limit i915_ring_test_irq debugfs to actual rings
For simplicity in testing, only report known rings in the mask. This
allows userspace to try and trigger a missed irq on every ring and do a
comparison between i915_ring_test_irq and i915_ring_missed_irq to see if
any rings failed.
v2: Move the debug message to after the rings are selected (so that the
message accurately reflects reality)
Chris Wilson [Sun, 3 Jul 2016 17:29:33 +0000 (18:29 +0100)]
drm/i915: Hold irq uncore.lock when initialising fw_domains
Acquiring the forcewake domain asserts that it is in an atomic section
(as we always expect to be under the uncore.lock). This is true except for
initialising the domains on Ivybridge, and so we generate a warning.
Wrap the manual usage of fw_domains inside the spin_lock.
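Roughly, the fix is just to take the lock around the one-off init-time use (the function name below is illustrative):
/* The forcewake get/put helpers assert they run in atomic context under
 * uncore.lock; satisfy that for the init-time workaround as well. */
spin_lock_irq(&dev_priv->uncore.lock);
fw_domains_init_workaround(dev_priv);   /* illustrative name */
spin_unlock_irq(&dev_priv->uncore.lock);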
Chris Wilson [Mon, 4 Jul 2016 07:08:39 +0000 (08:08 +0100)]
drm/i915: Allow userspace to request no-error-capture upon GPU hangs
igt likes to inject GPU hangs into its command streams. However, as we
expect these hangs, we don't actually want them recorded in the dmesg
output or stored in the i915_error_state (usually). To accommodate this
allow userspace to set a flag on the context that any hang emanating
from that context will not be recorded. We still do the error capture
(otherwise how do we find the guilty context and know its intent?) as
part of the reason for random GPU hang injection is to exercise the race
conditions between the error capture and normal execution.
v2: Split out the request->ringbuf error capture changes.
v3: Move the flag defines next to the intel_context->flags definition
Chris Wilson [Mon, 4 Jul 2016 07:08:38 +0000 (08:08 +0100)]
drm/i915: Record the ringbuffer associated with the request
The request tells us where to read the ringbuf from, so use that
information to simplify the error capture. If no request was active at
the time of the hang, the ring is idle and there is no information
inside the ring pertaining to the hang.
Note carefully that this will reduce the amount of information stored in
the error state - any ring without an active request will not be
recorded.
Chris Wilson [Mon, 4 Jul 2016 07:08:37 +0000 (08:08 +0100)]
drm/i915: Remove stop-rings debugfs interface
Now that we have (near) universal GPU recovery code, we can inject a
real hang from userspace and not need any fakery. Not only does this
mean that the testing is far more realistic, but we can simplify the
kernel in the process.
Chris Wilson [Mon, 4 Jul 2016 07:08:36 +0000 (08:08 +0100)]
drm/i915: Flush the RPS bottom-half when the GPU idles
Make sure that the RPS bottom-half is flushed before we set the idle
frequency when we decide the GPU is idle. This should prevent any races
with the bottom-half and setting the idle frequency, and ensures that
the bottom-half is bounded by the GPU's rpm reference taken for when it
is active (i.e. between gen6_rps_busy() and gen6_rps_idle()).
v2: Avoid recursively using the i915->wq - RPS does not touch the
struct_mutex so has no place being on the ordered i915->wq.
v3: Enable/disable interrupts for RPS busy/idle in order to prevent
further HW access from RPS outside of the wakeref.
Chris Wilson [Mon, 4 Jul 2016 07:08:35 +0000 (08:08 +0100)]
drm/i915: Add background commentary to "waitboosting"
Describe the intent of boosting the GPU frequency to maximum before
waiting on the GPU.
RPS waitboosting was introduced with commit b29c19b64528 ("drm/i915:
Boost RPS frequency for CPU stalls") but lacked a concise comment in the
code to explain itself.
Chris Wilson [Mon, 4 Jul 2016 07:08:34 +0000 (08:08 +0100)]
drm/i915: Restore waitboost credit to the synchronous waiter
Ideally, we want to automagically have the GPU respond to the
instantaneous load by reclocking itself. However, reclocking occurs
relatively slowly, and to the client waiting for a result from the GPU,
too late. To compensate and reduce the client latency, we allow the
first wait from a client to boost the GPU clocks to maximum. This
overcomes the lag in autoreclocking, at the expense of forcing the GPU
clocks too high. So to offset the excessive power usage, we currently
allow a client to only boost the clocks once before we detect the GPU
is idle again. This works reasonably for say the first frame in a
benchmark, but for many more synchronous workloads (like OpenCL) we find
the GPU clocks remain too low. By noting a wait which would idle the GPU
(i.e. we just waited upon the last known request), we can give that
client the idle boost credit (for their next wait) without the 100ms
delay required for us to detect the GPU idle state. The intention is to
boost clients that are stalling in the process of feeding the GPU more
work (and who in doing so let the GPU idle), without granting boost
credits to clients that are throttling themselves (such as compositors).
Chris Wilson [Mon, 4 Jul 2016 07:08:33 +0000 (08:08 +0100)]
drm/i915: Remove redundant queue_delayed_work() from throttle ioctl
We know, by design, that whilst the GPU is active (and thus we are
throttling) the retire_worker is queued. Therefore attempting to requeue
it with queue_delayed_work() is a no-op and we can safely remove it.
Chris Wilson [Mon, 4 Jul 2016 07:08:32 +0000 (08:08 +0100)]
drm/i915: Do not keep postponing the idle-work
Rather than persistently postponing the idle-work every time somebody
calls i915_gem_retire_requests() (potentially ensuring that we never
reach the idle state), queue the work the first time we detect all
requests are complete. Then if in 100ms, more requests have been queued,
we will abort the idle-worker and wait again until all the new requests
have been completed.
Of course, this does depend upon the idle worker cancelling itself
gracefully from the previous patch.
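The relevant workqueue semantics: mod_delayed_work() pushes an already-pending timer out, whereas queue_delayed_work() is a no-op if the work is still pending. A sketch of the intended behaviour (the idle test is illustrative):
/* Arm the idle-work only at the moment we first see everything complete;
 * if it is already pending, queue_delayed_work() leaves the original
 * 100ms deadline alone instead of postponing it. */
if (all_requests_completed(dev_priv))
        queue_delayed_work(dev_priv->wq, &dev_priv->mm.idle_work,
                           msecs_to_jiffies(100));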
Chris Wilson [Mon, 4 Jul 2016 07:08:31 +0000 (08:08 +0100)]
drm/i915: Only start retire worker when idle
The retire worker is a low frequency task that makes sure we retire
outstanding requests if userspace is being lax. We only need to start it
once as it remains active until the GPU is idle, so do a cheap test
before the more expensive queue_work(). A consequence of this is that we
need correct locking in the worker to make the hot path of request
submission cheap. To keep the symmetry and keep hangcheck strictly bound
by the GPU's wakelock, we move the cancel_sync(hangcheck) to the idle
worker before dropping the wakelock.
v2: Guard against RCU fouling the breadcrumbs bottom-half whilst we kick
the waiter.
v3: Remove the wakeref assertion squelching (now we hold a wakeref for
the hangcheck, any rpm error there is genuine).
v4: To prevent excess work when retiring requests, we split the busy
flag into two, a boolean to denote whether we hold the wakeref and a
bitmask of active engines.
v5: Reorder cancelling hangcheck upon idling to avoid a race where we
might cancel a hangcheck after being preempted by a new task
Chris Wilson [Sat, 2 Jul 2016 14:36:02 +0000 (15:36 +0100)]
drm/i915: Match bitmask size to types in intel_fb_initial_config()
smatch complains of:
drivers/gpu/drm/i915/intel_fbdev.c:403 intel_fb_initial_config() warn: should '1 << i' be a 64 bit type?
drivers/gpu/drm/i915/intel_fbdev.c:422 intel_fb_initial_config() warn: should '1 << i' be a 64 bit type?
drivers/gpu/drm/i915/intel_fbdev.c:501 intel_fb_initial_config() warn: should '1 << i' be a 64 bit type?
We are prepared to iterate over a u64 but don't limit the number of
connectors we try to configure to a maximum of 64.
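Illustratively, the mismatch and one way of keeping the shift and the mask the same width (a sketch of the class of fix, not the exact patch):
u64 conn_configured = 0;
unsigned int i, count;

/* A u64 mask can only track 64 connectors, so bound the loop... */
count = min_t(unsigned int, fb_helper->connector_count, 64);

for (i = 0; i < count; i++) {
        if (conn_configured & BIT_ULL(i))       /* 64-bit shift */
                continue;
        /* ... attempt to configure connector i ... */
        conn_configured |= BIT_ULL(i);          /* not "1 << i" (int-sized) */
}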
Chris Wilson [Fri, 1 Jul 2016 16:23:29 +0000 (17:23 +0100)]
drm/i915: Remove debug noise on detecting fault-injection of missed interrupts
Since the tests can and do explicitly check debugfs/i915_ring_missed_irqs
for the handling of a "missed interrupt", adding it to the dmesg at INFO
is just noise. When it happens for real, we still class it as an ERROR.
Note that I have chosen to remove it entirely because when we detect the
"missed interrupt" is irrelevant and the message contains no more
information than we can glean from looking in debugfs.
Chris Wilson [Fri, 1 Jul 2016 16:23:28 +0000 (17:23 +0100)]
drm/i915: Simplify enabling user-interrupts with L3-remapping
Borrow the idea from intel_lrc.c to precompute the mask of interrupts we
wish to always enable to avoid having lots of conditionals inside the
interrupt enabling.
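The idea, sketched with the mask field name borrowed from the execlists side (treat the details as illustrative):
/* Compute once at engine init which interrupt bits must always be
 * enabled, instead of rechecking HAS_L3_DPF() in the enable path. */
engine->irq_keep_mask = GT_RENDER_USER_INTERRUPT;
if (HAS_L3_DPF(dev_priv))
        engine->irq_keep_mask |= GT_RENDER_L3_PARITY_ERROR_INTERRUPT;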
Chris Wilson [Fri, 1 Jul 2016 16:23:27 +0000 (17:23 +0100)]
drm/i915: Move the get/put irq locking into the caller
With only a single callsite for intel_engine_cs->irq_get and ->irq_put,
we can reduce the code size by moving the common preamble into the
caller, and we can also eliminate the reference counting.
For completeness, as we are no longer doing reference counting on irq,
rename the get/put vfunctions to enable/disable respectively and are
able to review the use of posting reads. We only require the
serialisation with hardware when enabling the interrupt (i.e. so we
cannot miss an interrupt by going to sleep before the hardware truly
enables it).
Chris Wilson [Fri, 1 Jul 2016 16:23:26 +0000 (17:23 +0100)]
drm/i915: Embed signaling node into the GEM request
Under the assumption that enabling signaling will be a frequent
operation, lets preallocate our attachments for signaling inside the
(rather large) request struct (and so benefiting from the slab cache).
v2: Convert from void * to more meaningful names and types.
Chris Wilson [Fri, 1 Jul 2016 16:23:25 +0000 (17:23 +0100)]
drm/i915: Convert trace-irq to the breadcrumb waiter
If we convert the tracing over from direct use of ring->irq_get() and
over to the breadcrumb infrastructure, we only have a single user of the
ring->irq_get and so we will be able to simplify the driver routines
(eliminating the redundant validation and irq refcounting).
Process context is preferred over softirq (or even hardirq) for a couple
of reasons:
- we already utilize process context to have fast wakeup of a single
client (i.e. the client waiting for the GPU inspects the seqno for
itself following an interrupt to avoid the overhead of a context
switch before it returns to userspace)
- engine->irq_seqno() is not suitable for use from an softirq/hardirq
context as we may require long waits (100-250us) to ensure the seqno
write is posted before we read it from the CPU
A signaling framework is a requirement for enabling dma-fences.
v2: Move to a signaling framework based upon the waiter.
v3: Track the first-signal to avoid having to walk the rbtree everytime.
v4: Mark the signaler thread as RT priority to reduce latency in the
indirect wakeups.
v5: Make failure to allocate the thread fatal.
v6: Rename kthreads to i915/signal:%u
Chris Wilson [Fri, 1 Jul 2016 16:23:24 +0000 (17:23 +0100)]
drm/i915: Stop setting wraparound seqno on initialisation
We have testcases to ensure that seqno wraparound works fine, so we can
forgo forcing everyone to encounter seqno wraparound during early
uptime. seqno wraparound incurs a full GPU stall so not forcing it
will eliminate one jitter from the early system. Using the testcases, we
have very deterministic testing which given how difficult it would be to
debug an issue (GPU hang) stemming from a wraparound using pure
postmortem analysis I see no value in forcing a wrap during boot.
Advancing the global next_seqno after a GPU reset is equally pointless.
Chris Wilson [Fri, 1 Jul 2016 16:23:23 +0000 (17:23 +0100)]
drm/i915: Only apply one barrier after a breadcrumb interrupt is posted
If we flag the seqno as potentially stale upon receiving an interrupt,
we can use that information to reduce the frequency that we apply the
heavyweight coherent seqno read (i.e. if we wake up a chain of waiters).
v2: Use cmpxchg to replace READ_ONCE/WRITE_ONCE for more explicit
control of the ordering wrt to interrupt generation and interrupt
checking in the bottom-half.
Chris Wilson [Fri, 1 Jul 2016 16:23:22 +0000 (17:23 +0100)]
drm/i915: Check the CPU cached value in HWS of seqno after waking the waiter
If we have multiple waiters, we may find that many complete on the same
wake up. If we first inspect the seqno from the CPU cache, we may reduce
the number of heavyweight coherent seqno reads we require.
Chris Wilson [Fri, 1 Jul 2016 16:23:21 +0000 (17:23 +0100)]
drm/i915: Add a delay between interrupt and inspecting the final seqno (ilk)
On Ironlake, there is no command nor register to ensure that the write
from a MI_STORE command is completed (and coherent on the CPU) before the
command parser continues. This means that the ordering between the seqno
write and the subsequent user interrupt is undefined (like gen6+). So to
ensure that the seqno write is completed after the final user interrupt
we need to delay the read sufficiently to allow the write to complete.
This delay is undefined by the bspec, and empirically requires 75us even
though a register read combined with a clflush is less than 500ns. Hence,
the delay is due to an on-chip buffer rather than the latency of the write
to memory.
Note that the render ring controls this by filling the PIPE_CONTROL fifo
with stalling commands that force the earliest pipe-control with the
seqno to be completed before the command parser continues. Given that we
need a barrier operation for BSD, we may as well forgo the extra
per-batch latency by using a common per-interrupt barrier.
Studying the impact of adding the usleep shows that the overhead, for both
sequences of and individual synchronous no-op batches, is negligible for the
media engine (where the write now is unordered with the interrupt). Converting
the render engine over from the current glutton of pipe-controls over to
the per-interrupt delays speeds up both the sequential and individual
synchronous no-ops by 20% and 60%, respectively. This speed up holds
even when looking at the throughput of small copies (4KiB->4MiB), both
serial and synchronous, by about 20%. This is because despite adding a
significant delay to the interrupt, in all likelihood we will see the
seqno write without having to apply the barrier (only in the rare corner
cases where the write is delayed on the last required is the delay
necessary).
Chris Wilson [Fri, 1 Jul 2016 16:23:20 +0000 (17:23 +0100)]
drm/i915: Refactor scratch object allocation for gen2 w/a buffer
The gen2 w/a buffer is stuffed into the same slot as the gen5+ scratch
buffer. If we pass in the size we want to allocate for the scratch
buffer, both callers can use the same routine.
Chris Wilson [Fri, 1 Jul 2016 16:23:18 +0000 (17:23 +0100)]
drm/i915: Stop mapping the scratch page into CPU space
After the elimination of using the scratch page for Ironlake's
breadcrumb, we no longer need to kmap the object. We therefore can move
it into the high unmappable space and do not need to force the object to
be coherent (i.e. snooped on !llc platforms).
Chris Wilson [Fri, 1 Jul 2016 16:23:17 +0000 (17:23 +0100)]
drm/i915: Use HWS for seqno tracking everywhere
By using the same address for storing the HWS on every platform, we can
remove the platform specific vfuncs and reduce the get-seqno routine to
a single read of a cached memory location.
v2: Fix semaphore_passed() to look at the signaling engine (not the
waiter's)
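In effect the per-platform get-seqno paths collapse into one cached read of the status page; something like the following (function name illustrative):
/* Every engine stores its seqno at the same HWS slot, so retrieving it
 * is a plain cached memory read rather than a per-platform vfunc. */
static u32 engine_get_seqno(struct intel_engine_cs *engine)
{
        return intel_read_status_page(engine, I915_GEM_HWS_INDEX);
}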
Chris Wilson [Fri, 1 Jul 2016 16:23:16 +0000 (17:23 +0100)]
drm/i915: Spin after waking up for an interrupt
When waiting for an interrupt (waiting for the engine to complete some
work), we know we are the only waiter to be woken on this engine. We also
know when the GPU has nearly completed our request (or at least started
processing it), so after being woken and we detect that the GPU is
active and working on our request, allow us the bottom-half (the first
waiter who wakes up to handle checking the seqno after the interrupt) to
spin for a very short while to reduce client latencies.
The impact is minimal, there was an improvement to the realtime-vs-many
clients case, but exporting the function proves useful later. However,
it is tempting to adjust irq_seqno_barrier to include the spin. The
problem is first ensuring that the "start-of-request" seqno is coherent
as we use that as our basis for judging when it is ok to spin. If we
could, spinning there could dramatically shorten some sleeps, and allow
us to make the barriers more conservative to handle missed seqno writes
on more platforms (all gen7+ are known to have the occasional issue, at
least).
Chris Wilson [Fri, 1 Jul 2016 16:23:15 +0000 (17:23 +0100)]
drm/i915: Slaughter the thundering i915_wait_request herd
One particularly stressful scenario consists of many independent tasks
all competing for GPU time and waiting upon the results (e.g. realtime
transcoding of many, many streams). One bottleneck in particular is that
each client waits on its own results, but every client is woken up after
every batchbuffer - hence the thunder of hooves as then every client must
do its heavyweight dance to read a coherent seqno to see if it is the
lucky one.
Ideally, we only want one client to wake up after the interrupt and
check its request for completion. Since the requests must retire in
order, we can select the first client on the oldest request to be woken.
Once that client has completed his wait, we can then wake up the
next client and so on. However, all clients then incur latency as every
process in the chain may be delayed for scheduling - this may also then
cause some priority inversion. To reduce the latency, when a client
is added or removed from the list, we scan the tree for completed
seqno and wake up all the completed waiters in parallel.
Using igt/benchmarks/gem_latency, we can demonstrate this effect. The
benchmark measures the number of GPU cycles between completion of a
batch and the client waking up from a call to wait-ioctl. With many
concurrent waiters, with each on a different request, we observe that
the wakeup latency before the patch scales nearly linearly with the
number of waiters (before external factors kick in making the scaling much
worse). After applying the patch, we can see that only the single waiter
for the request is being woken up, providing a constant wakeup latency
for every operation. However, the situation is not quite as rosy for
many waiters on the same request, though to the best of my knowledge this
is much less likely in practice. Here, we can observe that the
concurrent waiters incur extra latency from being woken up by the
solitary bottom-half, rather than directly by the interrupt. This
appears to be scheduler induced (having discounted adverse effects from
having a rbtree walk/erase in the wakeup path), each additional
wake_up_process() costs approximately 1us on big core. Another effect of
performing the secondary wakeups from the first bottom-half is the
incurred delay this imposes on high priority threads - rather than
immediately returning to userspace and leaving the interrupt handler to
wake the others.
To offset the delay incurred with additional waiters on a request, we
could use a hybrid scheme that did a quick read in the interrupt handler
and dequeued all the completed waiters (incurring the overhead in the
interrupt handler, not the best plan either as we then incur GPU
submission latency) but we would still have to wake up the bottom-half
every time to do the heavyweight slow read. Or we could only kick the
waiters on the seqno with the same priority as the current task (i.e. in
the realtime waiter scenario, only it is woken up immediately by the
interrupt and simply queues the next waiter before returning to userspace,
minimising its delay at the expense of the chain, and also reducing
contention on its scheduler runqueue). This is effective at avoiding long
pauses in the interrupt handler and at avoiding the extra latency in
realtime/high-priority waiters.
v2: Convert from a kworker per engine into a dedicated kthread for the
bottom-half.
v3: Rename request members and tweak comments.
v4: Use a per-engine spinlock in the breadcrumbs bottom-half.
v5: Fix race in locklessly checking waiter status and kicking the task on
adding a new waiter.
v6: Fix deciding when to force the timer to hide missing interrupts.
v7: Move the bottom-half from the kthread to the first client process.
v8: Reword a few comments
v9: Break the busy loop when the interrupt is unmasked or has fired.
v10: Comments, unnecessary churn, better debugging from Tvrtko
v11: Wake all completed waiters on removing the current bottom-half to
reduce the latency of waking up a herd of clients all waiting on the
same request.
v12: Rearrange missed-interrupt fault injection so that it works with
igt/drv_missed_irq_hang
v13: Rename intel_breadcrumb and friends to intel_wait in preparation
for signal handling.
v14: RCU commentary, assert_spin_locked
v15: Hide BUG_ON behind the compiler; report on gem_latency findings.
v16: Sort seqno-groups by priority so that first-waiter has the highest
task priority (and so avoid priority inversion).
v17: Add waiters to post-mortem GPU hang state.
v18: Return early for a completed wait after acquiring the spinlock.
Avoids adding ourselves to the tree if the is already complete, and
skips the awkward question of why we don't do completion wakeups for
waits earlier than or equal to ourselves.
v19: Prepare for init_breadcrumbs to fail. Later patches may want to
allocate during init, so be prepared to propagate back the error code.
Testcase: igt/gem_concurrent_blit
Testcase: igt/benchmarks/gem_latency Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: "Rogozhkin, Dmitry V" <dmitry.v.rogozhkin@intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Dave Gordon <david.s.gordon@intel.com> Cc: "Goel, Akash" <akash.goel@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> #v18 Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-6-git-send-email-chris@chris-wilson.co.uk
Chris Wilson [Fri, 1 Jul 2016 16:23:14 +0000 (17:23 +0100)]
drm/i915: Separate GPU hang waitqueue from advance
Currently __i915_wait_request uses a per-engine wait_queue_t for the dual
purpose of waking after the GPU advances or for waking after an error.
In the future, we may add even more wake sources and require greater
separation, but for now we can conceptually simplify wakeups by separating
the two sources. In particular, this allows us to use different wait-queues
(e.g. one on the engine advancement, a global one for errors and one on
each request) without any hassle.