From b9ce3647b4901071b0dd35d62954a4bb0e5ba1d1 Mon Sep 17 00:00:00 2001
From: Ding Tianhong
Date: Thu, 21 Jan 2016 17:29:17 +0800
Subject: [PATCH] locking/mutexes: don't spin on owner when wait list is not
 NULL.

I built a script that creates several processes looping on an ioctl;
the ioctl calls into the kernel roughly like this:

xx_ioctl {
	...
	rtnl_lock();
	function();
	rtnl_unlock();
	...
}

The function may sleep for several ms, but will not hang. At the same
time, another user service may call ifconfig to change the state of
the ethernet device, and after several hours the hung task thread
reported this problem:

========================================================================
[149738.039038] INFO: task ifconfig:11890 blocked for more than 120 seconds.
[149738.040597] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[149738.042280] ifconfig        D ffff88061ec13680     0 11890  11573 0x00000080
[149738.042284]  ffff88052449bd40 0000000000000082 ffff88053a33f300 ffff88052449bfd8
[149738.042286]  ffff88052449bfd8 ffff88052449bfd8 ffff88053a33f300 ffffffff819e6240
[149738.042288]  ffffffff819e6244 ffff88053a33f300 00000000ffffffff ffffffff819e6248
[149738.042290] Call Trace:
[149738.042300]  [] schedule_preempt_disabled+0x29/0x70
[149738.042303]  [] __mutex_lock_slowpath+0xc5/0x1c0
[149738.042305]  [] mutex_lock+0x1f/0x2f
[149738.042309]  [] rtnl_lock+0x15/0x20
[149738.042311]  [] dev_ioctl+0xda/0x590
[149738.042314]  [] ? __do_page_fault+0x21c/0x560
[149738.042318]  [] sock_do_ioctl+0x45/0x50
[149738.042320]  [] sock_ioctl+0x1f0/0x2c0
[149738.042324]  [] do_vfs_ioctl+0x2e5/0x4c0
[149738.042327]  [] ? fget_light+0xa0/0xd0
================================ cut here ================================

I got the vmcore and found that ifconfig had already been on the
wait_list of the rtnl_lock for 120 seconds, while my processes could
acquire and release the rtnl_lock normally several times per second.
This means my processes were jumping the queue, so ifconfig could
never take the rtnl lock. I checked the mutex lock slow path and found
that a task may spin on the owner regardless of whether the wait list
is empty, which lets tasks on the wait list be cut in line
indefinitely. Add a test for the wait list in
mutex_can_spin_on_owner() to avoid this problem.

Signed-off-by: Ding Tianhong
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Davidlohr Bueso
Cc: Linus Torvalds
Cc: Paul E. McKenney
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Jason Low
Cc: Tim Chen
Cc: Waiman Long
Signed-off-by: Paul E. McKenney
---
 kernel/locking/mutex.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 0551c219c40e..596b3416b971 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -256,7 +256,7 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
 	struct task_struct *owner;
 	int retval = 1;
 
-	if (need_resched())
+	if (need_resched() || atomic_read(&lock->count) == -1)
 		return 0;
 
 	rcu_read_lock();
@@ -283,10 +283,11 @@ static inline bool mutex_try_to_acquire(struct mutex *lock)
 /*
  * Optimistic spinning.
  *
- * We try to spin for acquisition when we find that the lock owner
- * is currently running on a (different) CPU and while we don't
- * need to reschedule. The rationale is that if the lock owner is
- * running, it is likely to release the lock soon.
+ * We try to spin for acquisition when we find that there are no
+ * pending waiters and the lock owner is currently running on a
+ * (different) CPU and while we don't need to reschedule. The
+ * rationale is that if the lock owner is running, it is likely
+ * to release the lock soon.
  *
  * Since this needs the lock owner, and this mutex implementation
  * doesn't track the owner atomically in the lock field, we need to
-- 
2.39.5