The prep to remove the flushing list in
commit cc889e0f6ce6a63c62db17d702ecfed86d58083f
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date:   Wed Jun 13 20:45:19 2012 +0200

    drm/i915: disable flushing_list/gpu_write_list
causes some serious regressions. We can fix this by setting the
CS_STALL bit to ensure that the following seqno write happens only
after the cache flush has completed. But only do that when the caller
actually wants the flush (and not also when we invalidate caches
before starting the next batch).
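
For illustration, the surrounding function (gen6_render_ring_flush() in
intel_ringbuffer.c) then has roughly the following shape. This is a
simplified sketch, not the verbatim kernel code: the full flag setup and
the actual PIPE_CONTROL dword emission are elided.

static int
gen6_render_ring_flush(struct intel_ring_buffer *ring,
		       u32 invalidate_domains, u32 flush_domains)
{
	u32 flags = 0;
	int ret;

	/* Force SNB workarounds for PIPE_CONTROL flushes */
	ret = intel_emit_post_sync_nonzero_flush(ring);
	if (ret)
		return ret;

	/* ... set render cache flush and cache invalidate bits in flags ... */

	/*
	 * Stall the command streamer only when the caller asked for a
	 * flush: the following seqno write must not overtake the render
	 * cache flush, but a pure pre-batch invalidate needs no stall.
	 */
	if (flush_domains)
		flags |= PIPE_CONTROL_CS_STALL;

	ret = intel_ring_begin(ring, 6);
	if (ret)
		return ret;

	/* ... emit the PIPE_CONTROL command carrying flags plus a post-sync
	 * write to the scratch page, then intel_ring_advance(ring) ... */

	return 0;
}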
I've looked through all our ancient scrolls about gen6+ pipe control
workarounds, and this seems to be indeed a legal combination: We're
allowed to set the CS_STALL bit when we flush the render cache (which
we do).
While yelling at this code, also pass back the return value from
intel_emit_post_sync_nonzero_flush properly.
v2: Instead of emitting more pipe controls, set the CS_STALL bit on
the write flush as suggested by Chris Wilson. It seems to work, too.
Cc: Eric Anholt <eric@anholt.net>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=51436
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=51429
Tested-by: Lu Hua <huax.lu@intel.com>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
 	int ret;

 	/* Force SNB workarounds for PIPE_CONTROL flushes */
-	intel_emit_post_sync_nonzero_flush(ring);
+	ret = intel_emit_post_sync_nonzero_flush(ring);
+	if (ret)
+		return ret;

 	/* Just flush everything.  Experiments have shown that reducing the
 	 * number of bits based on the write domains has little performance
 	 * impact.
 	 */
...
 	flags |= PIPE_CONTROL_VF_CACHE_INVALIDATE;
 	flags |= PIPE_CONTROL_CONST_CACHE_INVALIDATE;
 	flags |= PIPE_CONTROL_STATE_CACHE_INVALIDATE;
+	/*
+	 * Ensure that any following seqno writes only happen when the render
+	 * cache is indeed flushed (but only if the caller actually wants that).
+	 */
+	if (flush_domains)
+		flags |= PIPE_CONTROL_CS_STALL;

 	ret = intel_ring_begin(ring, 6);
 	if (ret)
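
For reference, the reason CS_STALL must be conditional shows up at the
call sites. A hedged sketch of the two kinds of callers, using the flush
vfunc signature of that era (the exact call sites are illustrative):

/* Pre-batch cache invalidation: flush_domains == 0, so no CS stall. */
ring->flush(ring, I915_GEM_GPU_DOMAINS, 0);

/* Flush before emitting the seqno write: flush_domains != 0, so the
 * PIPE_CONTROL now also carries CS_STALL and the seqno cannot land
 * before the render cache flush has completed. */
ring->flush(ring, 0, I915_GEM_GPU_DOMAINS);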