smp_mb() inside bnx2_tx_avail() is used twice in the normal
bnx2_start_xmit() path (see illustration below). The full memory
barrier is only necessary during race conditions with tx completion.
We can speed up the tx path by replacing smp_mb() in bnx2_tx_avail()
with a compiler barrier. The compiler barrier only needs to force the
compiler to re-fetch tx_prod and tx_cons from memory, instead of
reusing possibly stale register copies.
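For reference, barrier() constrains only the compiler, not the CPU.
A rough sketch of the two primitives (close to, but not exactly, the
kernel's definitions in <linux/compiler.h> and the x86 <asm/barrier.h>):

    /* Compiler barrier: forces the compiler to discard cached
     * register copies and re-read memory, but emits no CPU
     * ordering instruction.
     */
    #define barrier()   __asm__ __volatile__("" : : : "memory")

    /* Full SMP memory barrier: additionally orders the CPU's own
     * loads and stores, e.g. mfence on x86.
     */
    #define smp_mb()    __asm__ __volatile__("mfence" : : : "memory")

Only the mfence (or equivalent) in smp_mb() costs CPU cycles; dropping
it from the fast path is where the speedup comes from.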
In the race condition between bnx2_start_xmit() and bnx2_tx_int(),
we have the following situation:
bnx2_start_xmit()                        bnx2_tx_int()
    if (!bnx2_tx_avail())
            BUG();

    ...

    if (!bnx2_tx_avail())
            netif_tx_stop_queue();       update_tx_index();
            smp_mb();                    smp_mb();
            if (bnx2_tx_avail())         if (netif_tx_queue_stopped() &&
                    netif_tx_wake_queue();       bnx2_tx_avail())
With smp_mb() removed from bnx2_tx_avail(), we need to add smp_mb() to
bnx2_start_xmit() as shown above to properly order netif_tx_stop_queue()
against the ring index check in bnx2_tx_avail(). Without this strict
ordering, both paths can read stale values, miss the wake condition, and
leave the tx queue stopped forever.
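The same ordering requirement can be demonstrated as a standalone
sketch in C11 atomics, with atomic_thread_fence(memory_order_seq_cst)
standing in for smp_mb(); the names here (xmit_tail, tx_int_tail,
WAKE_THRESH) are illustrative, not the driver's:

    #include <stdatomic.h>
    #include <stdbool.h>

    #define RING_SIZE   256
    #define WAKE_THRESH 16

    static atomic_uint tx_prod, tx_cons;
    static atomic_bool queue_stopped;

    static unsigned int tx_avail(void)
    {
            /* Plain loads, as in bnx2_tx_avail() after this patch;
             * relaxed atomics stand in for barrier().
             */
            return RING_SIZE - 1 -
                   (atomic_load_explicit(&tx_prod, memory_order_relaxed) -
                    atomic_load_explicit(&tx_cons, memory_order_relaxed));
    }

    /* Tail of the xmit path, running on one CPU. */
    static void xmit_tail(void)
    {
            if (tx_avail() <= WAKE_THRESH) {
                    atomic_store_explicit(&queue_stopped, true,
                                          memory_order_relaxed);
                    /* The smp_mb() this patch adds: the stop must be
                     * visible before tx_avail() is re-checked.
                     */
                    atomic_thread_fence(memory_order_seq_cst);
                    if (tx_avail() > WAKE_THRESH)
                            atomic_store_explicit(&queue_stopped, false,
                                                  memory_order_relaxed);
            }
    }

    /* Tail of the completion path, running on another CPU. */
    static void tx_int_tail(unsigned int new_cons)
    {
            /* update_tx_index() */
            atomic_store_explicit(&tx_cons, new_cons, memory_order_relaxed);
            /* The existing smp_mb() in bnx2_tx_int(). */
            atomic_thread_fence(memory_order_seq_cst);
            if (atomic_load_explicit(&queue_stopped, memory_order_relaxed) &&
                tx_avail() > WAKE_THRESH)
                    atomic_store_explicit(&queue_stopped, false,
                                          memory_order_relaxed); /* wake */
    }

This is the classic store-buffering pattern: each side stores first,
fences, then loads what the other side stored, so at least one side is
guaranteed to observe the other's update and wake the queue.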
This improves performance by about 5% with 2 ports running bi-directional
64-byte packets.
Reviewed-by: Benjamin Li <benli@broadcom.com>
Reviewed-by: Matt Carlson <mcarlson@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
 {
         u32 diff;

-        smp_mb();
+        /* Tell compiler to fetch tx_prod and tx_cons from memory. */
+        barrier();

         /* The ring uses 256 indices for 255 entries, one of them
          * needs to be skipped.
          */

...

         if (unlikely(bnx2_tx_avail(bp, txr) <= MAX_SKB_FRAGS)) {
                 netif_tx_stop_queue(txq);

+                /* netif_tx_stop_queue() must be done before checking
+                 * tx index in bnx2_tx_avail() below, because in
+                 * bnx2_tx_int(), we update tx index before checking for
+                 * netif_tx_queue_stopped().
+                 */
+                smp_mb();
+
                 if (bnx2_tx_avail(bp, txr) > bp->tx_wake_thresh)
                         netif_tx_wake_queue(txq);
         }