commit 30f89ca021c3e584b61bc5a14eede89f74b2e826 upstream.
When an endpoint ring is freed, it is either cached in a per-device ring
cache, or simply freed if the ring cache is full. If the ring was added
to the cache, then virt_dev->num_rings_cached is incremented. The cache
is designed to hold up to 31 endpoint rings, in array indexes 0 to 30.
When the device is freed (when the slot was disabled),
xhci_free_virt_device() is called; it frees the cached rings in array
indexes 0 to virt_dev->num_rings_cached - 1.
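For reference, the teardown loop in xhci_free_virt_device() has roughly
this shape (a sketch based on the description above, not the verbatim
kernel source):

	/* Frees entries 0 .. num_rings_cached - 1; a ring sitting at
	 * index num_rings_cached itself is never reached.
	 */
	for (i = 0; i < virt_dev->num_rings_cached; i++)
		xhci_ring_free(xhci, virt_dev->ring_cache[i]);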
Unfortunately, the original code in xhci_free_or_cache_endpoint_ring()
would put the first entry into the ring cache at array index 1, instead
of array index 0.  This was caused by the second assignment to
rings_cached:

	rings_cached = virt_dev->num_rings_cached;
	if (rings_cached < XHCI_MAX_RINGS_CACHED) {
		virt_dev->num_rings_cached++;
		rings_cached = virt_dev->num_rings_cached;
		virt_dev->ring_cache[rings_cached] =
			virt_dev->eps[ep_index].ring;
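Tracing the first insertion into an empty cache makes the off-by-one
visible (the annotated values are illustrative):

	rings_cached = virt_dev->num_rings_cached; /* rings_cached == 0 */
	if (rings_cached < XHCI_MAX_RINGS_CACHED) {
		virt_dev->num_rings_cached++;      /* count is now 1 */
		rings_cached =
			virt_dev->num_rings_cached; /* rings_cached == 1 */
		virt_dev->ring_cache[rings_cached] = /* stored at index 1; */
			virt_dev->eps[ep_index].ring; /* index 0 never used */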
This meant that when the device was freed, cached rings with indexes 0 to
N would be freed, and the last cached ring, at index N+1, would not be
freed.  When the driver was unloaded, this caused interesting messages
like:
xhci_hcd 0000:06:00.0: dma_pool_destroy xHCI ring segments, ffff880063040000 busy
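With the fix below, the ring is stored at the pre-increment index and the
count is bumped only afterwards, so N cached rings occupy indexes 0 to
N-1 and the free loop releases all of them.  The patched insertion reads
(reconstructed from the hunk below):

	rings_cached = virt_dev->num_rings_cached;
	if (rings_cached < XHCI_MAX_RINGS_CACHED) {
		virt_dev->ring_cache[rings_cached] =
			virt_dev->eps[ep_index].ring;
		virt_dev->num_rings_cached++;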
This should be queued to stable kernels back to 2.6.33.
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
---
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
 	rings_cached = virt_dev->num_rings_cached;
 	if (rings_cached < XHCI_MAX_RINGS_CACHED) {
-		virt_dev->num_rings_cached++;
-		rings_cached = virt_dev->num_rings_cached;
 		virt_dev->ring_cache[rings_cached] =
 			virt_dev->eps[ep_index].ring;
+		virt_dev->num_rings_cached++;
 		xhci_dbg(xhci, "Cached old ring, "
 				"%d ring%s cached\n",
-				rings_cached,
-				(rings_cached > 1) ? "s" : "");
+				virt_dev->num_rings_cached,
+				(virt_dev->num_rings_cached > 1) ? "s" : "");
 	} else {
 		xhci_ring_free(xhci, virt_dev->eps[ep_index].ring);
 		xhci_dbg(xhci, "Ring cache full (%d rings), "