Kent Overstreet [Wed, 20 Mar 2013 04:08:58 +0000 (15:08 +1100)]
aio: give shared kioctx fields their own cachelines
[akpm@linux-foundation.org: make reqs_active __cacheline_aligned_in_smp] Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Zach Brown <zab@redhat.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
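A standalone C11 sketch of the idea above. It is illustrative only: the kernel uses ____cacheline_aligned_in_smp rather than a hardcoded 64, and struct ioctx_like is a made-up stand-in for struct kioctx.

    #include <stdalign.h>
    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdio.h>

    #define CACHELINE 64    /* assumed size; the kernel knows the real one */

    struct ioctx_like {
        /* mostly read-only setup data */
        unsigned nr_events;

        /* hot, written by submitters: gets a cacheline of its own */
        alignas(CACHELINE) atomic_uint reqs_active;

        /* hot, written on completion: a separate cacheline again */
        alignas(CACHELINE) unsigned tail;
    };

    int main(void)
    {
        printf("reqs_active at offset %zu, tail at offset %zu\n",
               offsetof(struct ioctx_like, reqs_active),
               offsetof(struct ioctx_like, tail));
        return 0;
    }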
Kent Overstreet [Wed, 20 Mar 2013 04:08:57 +0000 (15:08 +1100)]
aio: kill struct aio_ring_info
struct aio_ring_info was kind of odd; the only place it was used was
embedded in struct kioctx, so there's no real need for it.
The next patch rearranges struct kioctx and puts various things on their
own cachelines - getting rid of struct aio_ring_info now makes that
reordering a bit clearer.
Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Zach Brown <zab@redhat.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kent Overstreet [Wed, 20 Mar 2013 04:08:57 +0000 (15:08 +1100)]
aio: kill batch allocation
Previously, allocating a kiocb required touching quite a few global (well,
per-kioctx) cachelines... so batching up allocation to amortize those was
worthwhile.  But we've gotten rid of some of those, and in another couple
of patches kiocb allocation won't require writing to any shared
cachelines, so we can just rip this code out.
Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Zach Brown <zab@redhat.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kent Overstreet [Wed, 20 Mar 2013 04:08:57 +0000 (15:08 +1100)]
aio: change reqs_active to include unreaped completions
The aio code tries really hard to avoid having to deal with the completion
ringbuffer overflowing. To do that, it has to keep track of the number of
outstanding kiocbs, and the number of completions currently in the
ringbuffer - and it's got to check that every time we allocate a kiocb.
Ouch.
But - we can improve this quite a bit if we just change reqs_active to
mean "number of outstanding requests and unreaped completions" - that
means kiocb allocation doesn't have to look at the ringbuffer, which is a
fairly significant win.
Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Zach Brown <zab@redhat.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
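A minimal userspace model of the new counting scheme; RING_SIZE, get_req() and reap_event() are illustrative stand-ins, not the kernel code:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define RING_SIZE 128

    /* outstanding requests + unreaped completions, in one counter */
    static atomic_uint reqs_active;

    /* Allocation looks only at the counter, never at the ringbuffer. */
    static bool get_req(void)
    {
        unsigned old = atomic_fetch_add(&reqs_active, 1);

        if (old >= RING_SIZE) {         /* would overflow the ring */
            atomic_fetch_sub(&reqs_active, 1);
            return false;
        }
        return true;
    }

    /* Completion does NOT decrement: the event still holds a ring slot.
     * Only reaping an event from the ring releases it. */
    static void reap_event(void)
    {
        atomic_fetch_sub(&reqs_active, 1);
    }

    int main(void)
    {
        for (int i = 0; i < RING_SIZE + 5; i++)
            if (!get_req())
                printf("allocation %d refused: ring would overflow\n", i);
        reap_event();
        printf("after one reap, get_req() = %d\n", get_req());
        return 0;
    }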
Kent Overstreet [Wed, 20 Mar 2013 04:08:57 +0000 (15:08 +1100)]
aio: use cancellation list lazily
Cancelling kiocbs requires adding them to a per kioctx linked list, which
is one of the few things we need to take the kioctx lock for in the fast
path. But most kiocbs can't be cancelled - so if we just do this lazily,
we can avoid quite a bit of locking overhead.
While we're at it, instead of using a flag bit, switch to using ki_cancel
itself to indicate that a kiocb has been cancelled/completed.  This lets
us get rid of ki_flags entirely.
[akpm@linux-foundation.org: remove buggy BUG()] Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Zach Brown <zab@redhat.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
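A rough userspace model of the ki_cancel trick: a sentinel value plus an atomic exchange, as in the patch. struct kiocb_like and the function names are made up for illustration, and the lazy list insertion is only noted in a comment.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    #define KIOCB_CANCELLED ((uintptr_t)-1)

    struct kiocb_like {
        _Atomic uintptr_t ki_cancel;    /* 0, a fn pointer, or sentinel */
    };

    static int my_cancel(struct kiocb_like *req)
    {
        printf("cancelling request %p\n", (void *)req);
        return 0;
    }

    /* Called only by the rare users that support cancellation; in the
     * real patch this is also the point where the kiocb is lazily linked
     * into the per-kioctx cancel list, taking ctx->lock just this once. */
    static void set_cancel(struct kiocb_like *req,
                           int (*fn)(struct kiocb_like *))
    {
        atomic_store(&req->ki_cancel, (uintptr_t)fn);
    }

    static int cancel(struct kiocb_like *req)
    {
        /* xchg: whoever swaps in the sentinel first wins the race */
        uintptr_t old = atomic_exchange(&req->ki_cancel, KIOCB_CANCELLED);

        if (!old || old == KIOCB_CANCELLED)
            return -1;      /* not cancellable, or already done */
        return ((int (*)(struct kiocb_like *))old)(req);
    }

    int main(void)
    {
        struct kiocb_like req = { .ki_cancel = 0 };

        printf("cancel w/o handler: %d\n", cancel(&req));
        set_cancel(&req, my_cancel);
        printf("cancel with handler: %d\n", cancel(&req));
        printf("second cancel: %d\n", cancel(&req));
        return 0;
    }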
Kent Overstreet [Wed, 20 Mar 2013 04:08:56 +0000 (15:08 +1100)]
aio: use flush_dcache_page()
Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Zach Brown <zab@redhat.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kent Overstreet [Wed, 20 Mar 2013 04:08:56 +0000 (15:08 +1100)]
aio: make aio_read_evt() more efficient, convert to hrtimers
Previously, aio_read_evt() pulled a single completion off the ringbuffer
at a time, locking and unlocking each time. Change it to pull off as many
events as it can at a time, and copy them directly to userspace.
This also fixes a bug where if copying the event to userspace failed,
we'd lose the event.
Also convert it to wait_event_interruptible_hrtimeout(), which
simplifies it quite a bit.
Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Zach Brown <zab@redhat.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
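A simplified userspace model of the batching: take the lock once, copy out as many events as are available, drop the lock. The ring, locking and event type are all reduced stand-ins; the kernel additionally copies to userspace before consuming an event, so a failed copy can't lose it.

    #include <pthread.h>
    #include <stdio.h>

    #define RING 64

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int ring[RING];
    static unsigned head, tail;     /* head = next to reap, tail = next free */

    /* Old scheme: one lock/unlock per event.  New scheme: one lock
     * round-trip for the whole batch. */
    static int read_events(int *dst, int want)
    {
        int got = 0;

        pthread_mutex_lock(&lock);
        while (got < want && head != tail) {
            /* the kernel copies to userspace here, and only advances
             * head once the copy succeeded */
            dst[got++] = ring[head % RING];
            head++;
        }
        pthread_mutex_unlock(&lock);
        return got;
    }

    int main(void)
    {
        int out[8];

        for (int i = 0; i < 5; i++)
            ring[tail++ % RING] = i;        /* pretend completions */

        printf("reaped %d events in one locked section\n",
               read_events(out, 8));
        return 0;
    }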
Kent Overstreet [Wed, 20 Mar 2013 04:08:56 +0000 (15:08 +1100)]
wait: add wait_event_hrtimeout()
Analogous to wait_event_timeout() and friends, this adds
wait_event_hrtimeout() and wait_event_interruptible_hrtimeout().
Note that unlike the versions that use regular timers, these don't return
the amount of time remaining when they return - instead, they return 0 or
-ETIME if they timed out, because I was uncomfortable with the semantics
of doing it the other way (and not confident I could get it right, anyway).
If the timer expires, there's no real guarantee that expire_time -
current_time would be <= 0 - due to timer slack certainly, and I'm not
sure I want to know the implications of the different clock bases in
hrtimers.
If the timer does expire and the code calculates that the time remaining
is nonnegative, that could be even worse if the calling code then reuses
that timeout. Probably safer to just return 0 then, but I could imagine
weird bugs or at least unintended behaviour arising from that too.
I came to the conclusion that if other users end up actually needing the
amount of time remaining, the sanest thing to do would be to create a
version that uses absolute timeouts instead of relative.
[akpm@linux-foundation.org: fix description of `timeout' arg] Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@elte.hu> Cc: Zach Brown <zab@redhat.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
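A userspace analogue of the stated semantics (0 on success, -ETIME on timeout, no time-remaining value), built on pthread_cond_timedwait(); this only mirrors the return convention described above, not the kernel implementation.

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t c = PTHREAD_COND_INITIALIZER;
    static int condition;

    /* Returns 0 if the condition became true, -ETIME on timeout -
     * deliberately no "time remaining" value. */
    static int wait_event_hrtimeout_like(long timeout_ns)
    {
        struct timespec ts;
        int err = 0;

        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_nsec += timeout_ns;
        ts.tv_sec += ts.tv_nsec / 1000000000L;
        ts.tv_nsec %= 1000000000L;

        pthread_mutex_lock(&m);
        while (!condition && err == 0)
            err = pthread_cond_timedwait(&c, &m, &ts);
        pthread_mutex_unlock(&m);

        return (condition || err == 0) ? 0 : -ETIME;
    }

    int main(void)
    {
        /* nothing signals the condvar, so this times out */
        printf("wait returned %d (-ETIME is %d)\n",
               wait_event_hrtimeout_like(10 * 1000 * 1000L), -ETIME);
        return 0;
    }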
Kent Overstreet [Wed, 20 Mar 2013 04:08:55 +0000 (15:08 +1100)]
aio: refcounting cleanup
The usage of ctx->dead was fubar - it makes no sense to explicitly check
it all over the place, especially when we're already using RCU.
Now, ctx->dead only indicates whether we've dropped the initial
refcount. The new teardown sequence is:
set ctx->dead
hlist_del_rcu();
synchronize_rcu();
Now we know no system calls can take a new ref, and it's safe to drop
the initial ref:
put_ioctx();
We also need to ensure there are no more outstanding kiocbs. This was
done incorrectly - it was being done in kill_ctx(), and before dropping
the initial refcount. At this point, other syscalls may still be
submitting kiocbs!
Now, we cancel and wait for outstanding kiocbs in free_ioctx(), after
kioctx->users has dropped to 0 and we know no more iocbs could be
submitted.
Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Zach Brown <zab@redhat.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
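A userspace model of the teardown ordering above, with a rwlock standing in for RCU (taking the write side models synchronize_rcu()); struct ctx and the function bodies are illustrative only.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct ctx {
        bool dead;
        int users;      /* the initial refcount */
        bool on_list;
    };

    static pthread_rwlock_t rcu = PTHREAD_RWLOCK_INITIALIZER;

    static void kill_ioctx(struct ctx *c)
    {
        c->dead = true;         /* 1. no new refs may be taken */
        c->on_list = false;     /* 2. hlist_del_rcu() */

        pthread_rwlock_wrlock(&rcu);    /* 3. synchronize_rcu(): all */
        pthread_rwlock_unlock(&rcu);    /*    in-flight lookups are done */

        /* 4. now it's safe to drop the initial ref; cancelling and
         * waiting for outstanding kiocbs happens in free_ioctx(),
         * only once users reaches 0 */
        if (--c->users == 0)
            printf("free_ioctx: cancel + wait for kiocbs, then free\n");
    }

    int main(void)
    {
        struct ctx c = { .dead = false, .users = 1, .on_list = true };

        kill_ioctx(&c);
        return 0;
    }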
Kent Overstreet [Wed, 20 Mar 2013 04:08:55 +0000 (15:08 +1100)]
aio: make aio_put_req() lockless
Freeing a kiocb needed to touch the kioctx for three things:
* Pulling it off the reqs_active list
* Decrementing reqs_active
* Issuing a wakeup, if the kioctx was in the process of being freed.
This patch moves these to aio_complete(), for a couple of reasons:
* aio_complete() already has to issue the wakeup, so if we drop the
kioctx refcount before aio_complete does its wakeup we don't have to
do it twice.
* aio_complete currently has to take the kioctx lock, so it makes sense
for it to pull the kiocb off the reqs_active list too.
* A later patch is going to change reqs_active to include unreaped
completions - this will mean allocating a kiocb doesn't have to look
at the ringbuffer. So taking the decrement of reqs_active out of
kiocb_free() is useful prep work for that patch.
This doesn't really affect cancellation, since existing (usb) code that
implements a cancel function still calls aio_complete() - we just have
to make sure that aio_complete does the necessary teardown for cancelled
kiocbs.
It does affect code paths where we free kiocbs that were never
submitted; they need to decrement reqs_active and pull the kiocb off the
reqs_active list. This occurs in two places: kiocb_batch_free(), which
is going away in a later patch, and the error path in io_submit_one.
Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Zach Brown <zab@redhat.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kent Overstreet [Wed, 20 Mar 2013 04:08:55 +0000 (15:08 +1100)]
aio: do fget() after aio_get_req()
aio_get_req() will fail if we have the maximum number of requests
outstanding, which, depending on the application, may not be uncommon.
So avoid doing an unnecessary fget().
Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Zach Brown <zab@redhat.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
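A toy illustration of the reordering, using open() as a stand-in for fget(): the cheap allocation that may legitimately fail happens first, so no file reference is taken and dropped for nothing. All names here are illustrative.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MAX_REQS 4
    static int nr_reqs;

    static int get_req(void)  { return nr_reqs < MAX_REQS ? nr_reqs++ : -1; }
    static void put_req(void) { nr_reqs--; }

    static int submit(const char *path)
    {
        int req = get_req();            /* cheap, likely-to-fail check first */
        if (req < 0)
            return -1;                  /* no fget()/fput() wasted */

        int fd = open(path, O_RDONLY);  /* the expensive fget() analogue */
        if (fd < 0) {
            put_req();
            return -1;
        }
        /* the request stays accounted until completion (not modeled) */
        close(fd);
        return 0;
    }

    int main(void)
    {
        for (int i = 0; i < 6; i++)
            printf("submit %d -> %d\n", i, submit("/dev/null"));
        return 0;
    }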
Zach Brown [Wed, 20 Mar 2013 04:08:53 +0000 (15:08 +1100)]
aio: remove retry-based AIO
This removes the retry-based AIO infrastructure now that nothing in tree
is using it.
We want to remove retry-based AIO because it is fundamentally unsafe.  It
retries IO submission from a kernel thread that has only assumed the mm of
the submitting task. All other task_struct references in the IO
submission path will see the kernel thread, not the submitting task. This
design flaw means that nothing of any meaningful complexity can use
retry-based AIO.
This removes all the code and data associated with the retry machinery.
The most significant benefit of this is the removal of the locking around
the unused run list in the submission path.
This has only been compiled.
Signed-off-by: Kent Overstreet <koverstreet@google.com> Signed-off-by: Zach Brown <zab@redhat.com> Cc: Zach Brown <zab@redhat.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Zach Brown [Wed, 20 Mar 2013 04:08:53 +0000 (15:08 +1100)]
gadget: remove only user of aio retry
This removes the only in-tree user of aio retry. This will let us remove
the retry code from the aio core.
Removing retry is relatively easy as the USB gadget wasn't using it to
retry IOs at all. It always fully submitted the IO in the context of the
initial io_submit() call. It only used the AIO retry facility to get the
submitter's mm context for copying the result of a read back to user
space. This is easy to implement with use_mm() and a work struct, much
like kvm does with async_pf_execute() for get_user_pages().
Signed-off-by: Zach Brown <zab@redhat.com> Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Zach Brown [Wed, 20 Mar 2013 04:08:52 +0000 (15:08 +1100)]
mm: remove old aio use_mm() comment
use_mm() is used in more places than just aio. There's no need to mention
callers when describing the function.
Signed-off-by: Zach Brown <zab@redhat.com> Signed-off-by: Kent Overstreet <koverstreet@google.com> Cc: Felipe Balbi <balbi@ti.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Mark Fasheh <mfasheh@suse.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Asai Thambi S P <asamymuthupa@micron.com> Cc: Selvan Mani <smani@micron.com> Cc: Sam Bradshaw <sbradshaw@micron.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin LaHaise <bcrl@kvack.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Wed, 20 Mar 2013 04:08:50 +0000 (15:08 +1100)]
net: rename random32 to prandom
Commit 496f2f93b1cc286f5a4f4f9acdc1e5314978683f ("random32: rename
random32 to prandom") renamed random32() and srandom32() to prandom_u32()
and prandom_seed() respectively.
net_random() and net_srandom() need to be redefined with prandom_* in
order to finish the naming transition.
While I'm at it, enclose the macro argument of net_srandom() with parentheses.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
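A small standalone example of why the parentheses matter (generic macro, not the actual net_srandom() definition):

    #include <stdio.h>

    static unsigned seeded;
    static void seed(unsigned s) { seeded = s; }

    #define srandom_bad(s)  seed(2 + s * 3)     /* unparenthesized */
    #define srandom_good(s) seed(2 + (s) * 3)

    int main(void)
    {
        srandom_bad(1 + 1);     /* expands to 2 + 1 + 1 * 3 = 6: wrong */
        printf("bad:  %u\n", seeded);
        srandom_good(1 + 1);    /* expands to 2 + (1 + 1) * 3 = 8 */
        printf("good: %u\n", seeded);
        return 0;
    }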
Akinobu Mita [Wed, 20 Mar 2013 04:08:50 +0000 (15:08 +1100)]
net/core: remove duplicate statements by do-while loop
Remove duplicate statements by using do-while loop instead of while loop.
-	A;
-	while (e) {
+	do {
 		A;
-	}
+	} while (e);
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
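A runnable rendering of that transformation; here A is just a printf, and e is a counter so the example terminates:

    #include <stdio.h>

    static void before(int e)
    {
        printf("A\n");          /* duplicated statement ... */
        while (e--)
            printf("A\n");      /* ... with the loop body */
    }

    static void after(int e)
    {
        do {
            printf("A\n");      /* single copy of A */
        } while (e--);
    }

    int main(void)
    {
        before(2);
        printf("--\n");
        after(2);               /* prints the same three lines */
        return 0;
    }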
Akinobu Mita [Wed, 20 Mar 2013 04:08:49 +0000 (15:08 +1100)]
net/core: rename random32() to prandom_u32()
Use the preferable function name, which implies the use of a pseudo-random
number generator.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Wed, 20 Mar 2013 04:08:49 +0000 (15:08 +1100)]
net/netfilter: rename random32() to prandom_u32()
Use the preferable function name, which implies the use of a pseudo-random
number generator.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: Pablo Neira Ayuso <pablo@netfilter.org> Cc: Patrick McHardy <kaber@trash.net> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Wed, 20 Mar 2013 04:08:49 +0000 (15:08 +1100)]
net/sched: rename random32() to prandom_u32()
Use the preferable function name, which implies the use of a pseudo-random
number generator.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: Stephen Hemminger <shemminger@vyatta.com> Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Wed, 20 Mar 2013 04:08:47 +0000 (15:08 +1100)]
scsi: rename random32() to prandom_u32()
Use the preferable function name, which implies the use of a pseudo-random
number generator.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: "James E.J. Bottomley" <JBottomley@parallels.com> Cc: Robert Love <robert.w.love@intel.com> Cc: James Smart <james.smart@emulex.com> Cc: Andrew Vasquez <andrew.vasquez@qlogic.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Wed, 20 Mar 2013 04:08:47 +0000 (15:08 +1100)]
lguest: rename random32() to prandom_u32()
Use the preferable function name, which implies the use of a pseudo-random
number generator.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Acked-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Wed, 20 Mar 2013 04:08:46 +0000 (15:08 +1100)]
infiniband: rename random32() to prandom_u32()
Use the preferable function name, which implies the use of a pseudo-random
number generator.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Reviewed-by: Steve Wise <swise@opengridcomputing.com> Cc: Roland Dreier <roland@kernel.org> Cc: Sean Hefty <sean.hefty@intel.com> Cc: Hal Rosenstock <hal.rosenstock@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Wed, 20 Mar 2013 04:08:45 +0000 (15:08 +1100)]
x86: rename random32() to prandom_u32()
Use the preferable function name, which implies the use of a pseudo-random
number generator.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Acked-by: H. Peter Anvin <hpa@zytor.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Wed, 20 Mar 2013 04:08:44 +0000 (15:08 +1100)]
x86: pageattr-test: remove srandom32 call
pageattr-test calls srandom32() once every test iteration.  But calling
srandom32() after late initcalls is not meaningful, because the random
state for random32() is mixed with good random numbers in the
late_initcall prandom_reseed().
So remove the call to srandom32().
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Acked-by: H. Peter Anvin <hpa@zytor.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Wed, 20 Mar 2013 04:08:44 +0000 (15:08 +1100)]
raid6test: use prandom_bytes()
Use prandom_bytes() to generate random bytes for test data.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: Dan Williams <djbw@fb.com> Cc: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
kernel/pid.c: improve flow of a loop inside alloc_pidmap.
find_next_offset() searches for an available ("clean") bit in the
respective pid bitmap (page); it returns the offset if one is found,
otherwise it returns a value equal to BITS_PER_PAGE.
Suppose find_next_offset() didn't find any available bit: then there's no
point in calling mk_pid() (wasted CPU cycles).
Therefore it's better to call mk_pid() only after the check (offset <
BITS_PER_PAGE) succeeds.  Another point: when (offset < BITS_PER_PAGE)
fails, mk_pid() would be called again afterwards anyway.
Signed-off-by: Raphael S. Carvalho <raphael.scarv@gmail.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Serge Hallyn <serge.hallyn@canonical.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
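A scaled-down, standalone model of the reordered flow; BITS_PER_PAGE, the bitmap and mk_pid() are simplified stand-ins for the pidmap code:

    #include <stdio.h>

    #define BITS_PER_PAGE 8

    static unsigned map = 0xDF;     /* bit 5 is the only free bit */

    static int find_next_offset(unsigned m, int off)
    {
        for (; off < BITS_PER_PAGE; off++)
            if (!(m & (1u << off)))
                return off;
        return BITS_PER_PAGE;       /* nothing free */
    }

    static int mk_pid(int base, int offset) { return base + offset; }

    int main(void)
    {
        int offset = find_next_offset(map, 0);

        /* mk_pid() runs only after the check succeeds - no wasted
         * work when the page is full */
        if (offset < BITS_PER_PAGE)
            printf("pid = %d\n", mk_pid(100, offset));
        else
            printf("pidmap page full\n");
        return 0;
    }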
Jingoo Han [Wed, 20 Mar 2013 04:08:42 +0000 (15:08 +1100)]
drivers/char/hw_random/exynos-rng.c: add CONFIG_PM_SLEEP to suspend/resume functions
Add CONFIG_PM_SLEEP to suspend/resume functions to fix the following build
warning when CONFIG_PM_SLEEP is not selected.
drivers/char/hw_random/exynos-rng.c:147:12: warning: 'exynos_rng_runtime_suspend' defined but not used [-Wunused-function]
drivers/char/hw_random/exynos-rng.c:157:12: warning: 'exynos_rng_runtime_resume' defined but not used [-Wunused-function]
Signed-off-by: Jingoo Han <jg1.han@samsung.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
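A minimal reproduction of the pattern in plain C; HAVE_SLEEP stands in for CONFIG_PM_SLEEP, and toggling it to 0 shows how the guard silences -Wunused-function:

    #include <stdio.h>

    #define HAVE_SLEEP 1    /* stands in for CONFIG_PM_SLEEP */

    #if HAVE_SLEEP
    /* only defined when something will actually reference them */
    static int rng_suspend(void) { return 0; }
    static int rng_resume(void)  { return 0; }
    #endif

    int main(void)
    {
    #if HAVE_SLEEP
        printf("suspend=%d resume=%d\n", rng_suspend(), rng_resume());
    #else
        puts("PM sleep disabled: handlers compiled out, no warning");
    #endif
        return 0;
    }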
Manfred Spraul [Wed, 20 Mar 2013 04:08:42 +0000 (15:08 +1100)]
ipc/sem.c: alternatives to preempt_disable()
ipc/sem.c uses a custom wakeup scheme that relies on preempt_disable().
On -RT, this causes increased latencies and debug warnings.
The patch adds two additional schemes:
- one built around a completion - could be better for -RT kernels
- one built around a spinlock - unfortunately it's broken
- and the current one
My preferred solution would be the spinlock implementation: -RT would use
preemptible spinlocks, mainline normal spinlocks.  Thus both would get the
optimal implementation without any special code in ipc/sem.c.
Unfortunately, I don't see how the broken spinlock variant could be fixed.
Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Mike Galbraith <efault@gmx.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Peter Hurley [Wed, 20 Mar 2013 04:08:41 +0000 (15:08 +1100)]
ipc: refactor msg list search into separate function
Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Peter Hurley [Wed, 20 Mar 2013 04:08:41 +0000 (15:08 +1100)]
ipc: simplify msg list search
Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Peter Hurley [Wed, 20 Mar 2013 04:08:40 +0000 (15:08 +1100)]
ipc: implement MSG_COPY as a new receive mode
Teach the helper routines about MSG_COPY so that msgtyp is preserved as
the message number to copy.
The security functions affected by this change were audited and no
additional changes are necessary.
Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
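An illustrative userspace use of the mode, assuming a kernel built with CONFIG_CHECKPOINT_RESTORE and a glibc that exposes MSG_COPY under _GNU_SOURCE. With MSG_COPY, msgtyp selects the message by position instead of filtering by type:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct mbuf { long mtype; char mtext[64]; };

    int main(void)
    {
        int id = msgget(IPC_PRIVATE, 0600 | IPC_CREAT);
        struct mbuf m = { .mtype = 42 };

        strcpy(m.mtext, "hello");
        if (id < 0 || msgsnd(id, &m, sizeof(m.mtext), 0) < 0)
            return 1;

        /* peek at message #0 without dequeuing it */
        struct mbuf c;
        if (msgrcv(id, &c, sizeof(c.mtext), 0, MSG_COPY | IPC_NOWAIT) >= 0)
            printf("copied: type=%ld text=%s\n", c.mtype, c.mtext);
        else
            perror("msgrcv MSG_COPY");      /* e.g. kernel lacks support */

        msgctl(id, IPC_RMID, NULL);
        return 0;
    }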
Peter Hurley [Wed, 20 Mar 2013 04:08:40 +0000 (15:08 +1100)]
ipc: remove msg handling from queue scan
In preparation for refactoring the queue scan into a separate
function, relocate msg copying.
Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Peter Hurley [Wed, 20 Mar 2013 04:08:40 +0000 (15:08 +1100)]
ipc: set EFAULT as default error in load_msg()
Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Peter Hurley [Wed, 20 Mar 2013 04:08:40 +0000 (15:08 +1100)]
ipc: tighten msg copy loops
Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Peter Hurley [Wed, 20 Mar 2013 04:08:39 +0000 (15:08 +1100)]
ipc: clamp with min()
Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
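The commit has no body, so the following is only a guess at the shape of such a cleanup: an open-coded clamp replaced by min().

    #include <stddef.h>
    #include <stdio.h>

    #define min(a, b) ((a) < (b) ? (a) : (b))
    #define DATALEN_MSG 64      /* illustrative constant */

    int main(void)
    {
        size_t len = 200;

        /* before: if (len > DATALEN_MSG) alen = DATALEN_MSG;
         *         else                   alen = len;           */
        size_t alen = min(len, (size_t)DATALEN_MSG);

        printf("alen = %zu\n", alen);
        return 0;
    }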
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:39 +0000 (15:08 +1100)]
vmcore: introduce mmap_vmcore()
This patch introduces mmap_vmcore().
If the flag MEM_TYPE_CURRENT_KERNEL is set, the remapped area is the
buffer on the 2nd kernel.  If it is not set, the remapped area is some
area in old memory.
Neither a writable nor an executable mapping is permitted, even with
mprotect().  A non-writable mapping is also a requirement of
remap_pfn_range() when mapping linear pages onto non-consecutive physical
pages; see is_cow_mapping().
On x86-32 PAE kernels, mmap() supports at most 16TB of memory.  This
limitation comes from the fact that the third argument of
remap_pfn_range(), pfn, is an unsigned long, which is 32 bits wide on
x86-32 (2^32 pages * 4KiB per page = 2^44 bytes = 16TB).
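A one-liner checking that arithmetic - 2^32 pages times 4KiB per page is 16TiB:

    #include <stdio.h>

    int main(void)
    {
        /* 32-bit pfn, 4KiB pages: the largest mappable range */
        unsigned long long max = (1ULL << 32) * 4096;

        printf("%llu bytes = %llu TiB\n", max, max >> 40);
        return 0;
    }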
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:38 +0000 (15:08 +1100)]
vmcore: round-up offset of vmcore object in page-size boundary
To satisfy mmap()'s page-size boundary requirement, round up the offset of
each vmcore object to a page-size boundary; each offset is connected to a
user-space virtual address through the mapping set up by mmap().
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:38 +0000 (15:08 +1100)]
vmcore: check if vmcore objects satisfy mmap()'s page-size boundary requirement
If there's some vmcore object that doesn't satisfy the page-size boundary
requirement, remap_pfn_range() fails to remap it to user-space.
The only objects that might not satisfy the requirement are the ELF note
segments.  The memory chunks corresponding to PT_LOAD entries are
guaranteed to satisfy it by the copy from old memory to a buffer in the
2nd kernel, done in a later patch.
This patch doesn't copy each note segment into the 2nd kernel, since
together they can amount to a large total when there are many CPUs.  For
example, the current maximum number of CPUs on x86_64 is 5120, at which
the note segments exceed 1MB with NT_PRSTATUS alone.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:38 +0000 (15:08 +1100)]
vmcore: check NT_VMCORE_PAD as a mark indicating the end of ELF note buffer
A modern kernel marks the end of the ELF note buffer with an NT_VMCORE_PAD
type note in order to make the buffer satisfy mmap()'s page-size boundary
requirement.  This patch stops reading a buffer when the note type being
read is NT_VMCORE_PAD.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:37 +0000 (15:08 +1100)]
kexec: fill note buffers by NT_VMCORE_PAD notes in page-size boundary
Fill both the crash_notes and vmcoreinfo_note buffers with NT_VMCORE_PAD
notes to make them satisfy mmap()'s page-size boundary requirement.
So far, the end of the note segments has been marked by a zero-filled ELF
header.  Instead, this patch writes NT_VMCORE_PAD notes at the end of the
note segments up to the page-size boundary.
Old kernels can still handle ELF segments created without the null header,
because they stop reading ELF segments once the real size read reaches
p_memsz.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:37 +0000 (15:08 +1100)]
elf: introduce NT_VMCORE_PAD type
The NT_VMCORE_PAD type is introduced to make both crash_notes buffer and
vmcoreinfo_note buffer satisfy mmap()'s page-size boundary requirement by
filling them with this note type.
The purpose of this type is just to align the buffer in page-size
boundary; it has no meaning in contents, which are fully filled with zero.
This note type belongs to the "VMCOREINFO" name space, and the type in
this name space is 7.  The reason the numbers 1 to 5 are not chosen is
that, for 1 to 4, there are corresponding note types using the same
numbers in the "CORE" name space, and the crash utility and makedumpfile
don't distinguish note types by name space at all; as for the remaining 5,
it has somehow not been used since the v2.4.0 kernel, despite the fact
that NT_AUXV is defined as 6.  It looks as though something avoids 5, so
to be conservative, 5 is simply not chosen here.
By this change, gdb and binutils work well without any change, but
makedumpfile and crash utility need their changes to distinguish two note
types in "VMCOREINFO" name space.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:37 +0000 (15:08 +1100)]
kexec, elf: introduce NT_VMCORE_DEBUGINFO note type
This patch introduces NT_VMCORE_DEBUGINFO as a note type for the note in
the "VMCOREINFO" name space, which has had no type name so far.  The name
means that it is the kind of note in vmcoreinfo that contains the system
kernel's debug information.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:36 +0000 (15:08 +1100)]
vmcore: allocate per-cpu crash_notes objects on page-size boundary
To satisfy mmap()'s page-size boundary requirement, allocate per-cpu
crash_notes objects on page-size boundary.
/proc/vmcore on the 2nd kernel checks whether each note object is
allocated on a page-size boundary.  If some object doesn't satisfy the
page-size boundary requirement, /proc/vmcore doesn't provide the mmap()
interface.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:36 +0000 (15:08 +1100)]
vmcore: read buffers for vmcore objects copied from old memory
If the flag MEM_TYPE_CURRENT_KERNEL is set, the object has been copied
into a buffer on the 2nd kernel, and read_vmcore() reads that buffer.  If
the flag is not set, read_vmcore() reads old memory as usual.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:36 +0000 (15:08 +1100)]
vmcore: clean up read_vmcore()
Clean up read_vmcore().  The part handling objects in vmcore_list can be
written uniformly with the part handling ELF headers.  This removes
duplicated and complicated code, making it clearer what is done there.
Also, by this change, map_offset_to_paddr() is no longer used; remove it.
and the first one is kept in old memory and the 2nd one is copied into
buffer on 2nd kernel.
This kind of non-page-size-aligned area can always occur, since any part
of System RAM can be converted into a reserved area at runtime.
Without this copying, i.e. if non-page-size-aligned pages in old memory
were remapped directly, mmap() would have to export memory that is not
part of the dump target to user-space.  In the above example, this is the
reserved area 0x9f800-0xa0000.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:35 +0000 (15:08 +1100)]
vmcore, procfs: introduce a flag to distinguish objects copied in 2nd kernel
Part of the dump target memory is copied into the 2nd kernel if it doesn't
satisfy mmap()'s page-size boundary requirement.  To distinguish such a
copied object from usual old memory, a flag MEM_TYPE_CURRENT_KERNEL is
introduced.  If this flag is set, the object is considered to have been
copied into a buffer on the 2nd kernel.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:34 +0000 (15:08 +1100)]
vmcore: round up buffer size of ELF headers by PAGE_SIZE
To satisfy mmap()'s page-size boundary requirement, round up the buffer
size of the ELF headers to a multiple of PAGE_SIZE.  The resulting value
becomes the offset of the ELF note segments, and it is assigned to the
unique PT_NOTE program header entry.
Also, the parts that assumed the old ELF headers' size are replaced with
this new rounded-up value.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:34 +0000 (15:08 +1100)]
vmcore: allocate buffer for ELF headers on page-size alignment
Allocate the buffer for the ELF headers on a page-size aligned boundary to
satisfy the mmap() requirement.  For this, __get_free_pages() is used
instead of kmalloc().
Also, a later patch will decrease the actually used buffer size for the
ELF headers, so it's necessary to keep the original buffer size and the
actually used size separately: elfcorebuf_sz_orig keeps the original one
and elfcorebuf_sz the actually used one.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:34 +0000 (15:08 +1100)]
vmcore, sysfs: export ELF note segment size instead of vmcoreinfo data size
Currently, vmcoreinfo exports its data part only, but kexec-tools sets the
whole ELF note segment size in the p_memsz member.  This is no problem at
the current ELF note segment size, but if it grows in the future, a read
might not reach the ELF note header located at the larger p_memsz
position, failing to read the whole ELF segment.
Note: kexec-tools assigns PAGE_SIZE to p_memsz for the other ELF note
types, so for the same reason the same issue occurs there if the actual
ELF note data exceeds (PAGE_SIZE - 2 * KEXEC_NOTE_HEAD_BYTES).
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:34 +0000 (15:08 +1100)]
vmcore: rearrange program headers without assuming consecutive PT_NOTE entries
Current code assumes that all PT_NOTE headers are placed at the beginning
of the program header table and that they are consecutive.  But the
assumption could be broken by future changes to either kexec-tools or the
1st kernel.  This patch removes the assumption and rearranges the program
headers so that the following conditions are satisfied:
- the PT_NOTE entry is unique and is the first entry,
- the order of the program headers is unchanged during the rearrangement;
  only their positions change, in the positive direction,
- the unused part at the bottom of the program header table is filled
  with 0.
Also, this patch adds one exceptional case, where the number of PT_NOTE
entries is somehow 0; then the function returns immediately.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:33 +0000 (15:08 +1100)]
vmcore: clean up by removing unnecessary variable
The variable j has type int but is compared against a u64 value.
Also, the purpose of j is exactly what the variable real_sz is now used
for.  Replace j with real_sz and remove j.
HATAYAMA Daisuke [Wed, 20 Mar 2013 04:08:33 +0000 (15:08 +1100)]
vmcore: reference e_phoff member explicitly to get position of program header table
Currently, reads of /proc/vmcore are done by read_oldmem(), which uses
ioremap/iounmap per single page.  For example, if the memory is 1GB,
ioremap/iounmap are called (1GB / 4KB) times, that is, 262144 times.  This
causes a big performance degradation.
In particular, the main prospective user of this mmap() is makedumpfile,
which not only reads memory from /proc/vmcore but also does other
processing such as filtering, compression and IO work.  The page table
updates and the following TLB flushes make such processing much slower;
though I have yet to write a patch for makedumpfile and to confirm how
much it improves.
To address the issue, this patch implements mmap() on /proc/vmcore to
improve read performance.  My simple benchmark shows an improvement from
200 [MiB/sec] to over 50.0 [GiB/sec].
This patch:
Currently, the code assumes that the position of the program header table
is next to the ELF header.  But a future change could break that
assumption on either kexec-tools or the 1st kernel.  To avoid the worst
case, reference the e_phoff member explicitly to get the position of the
program header table as a file offset.
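A userspace illustration of the robust approach, reading the running binary's own ELF header and locating the program header table via e_phoff instead of assuming it follows the ELF header:

    #include <elf.h>
    #include <stdio.h>

    int main(void)
    {
        Elf64_Ehdr ehdr;
        FILE *f = fopen("/proc/self/exe", "rb");

        if (!f || fread(&ehdr, sizeof(ehdr), 1, f) != 1)
            return 1;

        /* fragile: phdrs at sizeof(Elf64_Ehdr); robust: at e_phoff */
        printf("e_phoff = %llu (sizeof ehdr = %zu), %u phdrs\n",
               (unsigned long long)ehdr.e_phoff, sizeof(ehdr),
               (unsigned)ehdr.e_phnum);
        fclose(f);
        return 0;
    }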
Nathan Zimmer [Wed, 20 Mar 2013 04:08:31 +0000 (15:08 +1100)]
procfs: improve scaling in proc
I am currently tracking a hot lock reported by a customer on a large
system, 512 cores.  I am currently running 3.8-rc7, but the issue looks
like it has been this way for a very long time.  The offending lock is
proc_dir_entry->pde_unload_lock.
This patch converts the lock to use RCU.  However, the pde_openers list is
still controlled by a spinlock.  I tested on a 4096-core machine and the
lock doesn't seem hot, at least according to perf.
This is a refresh of what was originally suggested by Eric Dumazet some
time ago.  I have also taken in some comments from Andrew and several
other people whose names escape me but to whom I am quite grateful.
Supporting numbers (lower is better); they are from the test I posted
earlier:

cpuinfo    baseline    rcu
tasks      read-sec    read-sec
    1        0.0141      0.0141
    2        0.0140      0.0142
    4        0.0140      0.0141
    8        0.0145      0.0140
   16        0.0553      0.0168
   32        0.1688      0.0549
   64        0.5017      0.1690
  128        1.7005      0.5038
  256        5.2513      2.0804
  512        8.0529      3.0162
Signed-off-by: Nathan Zimmer <nzimmer@sgi.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Oleg Nesterov [Wed, 20 Mar 2013 04:08:31 +0000 (15:08 +1100)]
coredump: change wait_for_dump_helpers() to use wait_event_interruptible()
wait_for_dump_helpers() calls wake_up/kill_fasync from inside the
wait_event-like loop.  This is not needed and in fact not strictly
correct; we can/should do this only once, after we change pipe->writers.
We could even check whether it becomes zero.
Change this code to use wait_event_interruptible(); this can also help to
make this wait freezable.
With this patch we check pipe->readers without pipe_lock(), this is fine.
Once we see pipe->readers == 1 we know that the handler decremented the
counter, this is all we need.
Signed-off-by: Oleg Nesterov <oleg@redhat.com> Acked-by: Mandeep Singh Baines <msb@chromium.org> Cc: Neil Horman <nhorman@redhat.com> Cc: "Rafael J. Wysocki" <rjw@sisk.pl> Cc: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Oleg Nesterov [Wed, 20 Mar 2013 04:08:30 +0000 (15:08 +1100)]
coredump: introduce dump_interrupted()
By discussion with Mandeep.
Change dump_write(), dump_seek() and do_coredump() to check
signal_pending() and abort if it is true. dump_seek() does this only
before f_op->llseek(), otherwise it relies on dump_write().
We need this change to ensure that the coredump won't delay suspend, and
to ensure it reacts to SIGKILL "quickly enough"; a core dump can take a
lot of time.  In particular this can help the oom-killer.
We add the new trivial helper, dump_interrupted() to add the comments and
to simplify the potential freezer changes. Perhaps it will have more
callers.
Ideally it should do try_to_freeze() but then we need the unpleasant
changes in dump_write() and wait_for_dump_helpers(). It is not trivial to
change dump_write() to restart if f_op->write() fails because of
freezing(). We need to handle the short writes, we need to clear
TIF_SIGPENDING (and we can't rely on recalc_sigpending() unless we change
it to check PF_DUMPCORE). And if the buggy f_op->write() sets
TIF_SIGPENDING we can not distinguish this case from the race with
freeze_task() + __thaw_task().
So we simply accept the fact that the freezer can truncate a core dump,
but at least you can reliably suspend.  Hopefully we can tolerate this
unlikely case, and the necessary complications aren't worth the trouble.
But if we decide to make the coredumping freezable later we can do this on
top of this change.
Signed-off-by: Oleg Nesterov <oleg@redhat.com> Acked-by: Mandeep Singh Baines <msb@chromium.org> Cc: Neil Horman <nhorman@redhat.com> Cc: "Rafael J. Wysocki" <rjw@sisk.pl> Cc: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Oleg Nesterov [Wed, 20 Mar 2013 04:08:30 +0000 (15:08 +1100)]
coredump: sanitize the setting of signal->group_exit_code
Now that the coredumping process can be SIGKILL'ed, the setting of
->group_exit_code in do_coredump() can race with complete_signal(), and
SIGKILL or 0x80 can be "lost"; or wait(status) can report status ==
(SIGKILL | 0x80).
But the main problem is that it is not clear to me what should we do if
binfmt->core_dump() succeeds but SIGKILL was sent, that is why this patch
comes as a separate change.
This patch adds 0x80 if ->core_dump() succeeds and the process was not
killed. But perhaps we can (should?) re-set ->group_exit_code changed by
SIGKILL back to "siginfo->si_signo |= 0x80" in case when core_dumped == T.
Signed-off-by: Oleg Nesterov <oleg@redhat.com> Tested-by: Mandeep Singh Baines <msb@chromium.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Neil Horman <nhorman@redhat.com> Cc: "Rafael J. Wysocki" <rjw@sisk.pl> Cc: Roland McGrath <roland@hack.frob.com> Cc: Tejun Heo <tj@kernel.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Oleg Nesterov [Wed, 20 Mar 2013 04:08:30 +0000 (15:08 +1100)]
coredump: ensure that SIGKILL always kills the dumping thread
prepare_signal() blesses SIGKILL sent to the dumping process but this
signal can be "lost" anyway.  The problem is that complete_signal() sees
SIGNAL_GROUP_EXIT and skips the "kill them all" logic. And even if the
dumping process is single-threaded (so the target is always "correct"),
the group-wide SIGKILL is not recorded in task->pending and thus
__fatal_signal_pending() won't be true. A multi-threaded case has even
more problems.
And even ignoring all technical details, SIGNAL_GROUP_EXIT doesn't look
right to me. This coredumping process is not exiting yet, it can do a lot
of work dumping the core.
With this patch the dumping process doesn't have SIGNAL_GROUP_EXIT; we set
signal->group_exit_task instead.  This makes signal_group_exit() true and
thus should equally close the races with exit/exec/stop, but it allows the
dumping thread to be killed reliably.
Notes:
- It is not clear what should we do with ->group_exit_code
if the dumper was killed, see the next change.
- we need more (hopefully straightforward) changes to ensure
that SIGKILL actually interrupts the coredump. Basically we
need to check __fatal_signal_pending() in dump_write() and
dump_seek().
Signed-off-by: Oleg Nesterov <oleg@redhat.com> Tested-by: Mandeep Singh Baines <msb@chromium.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Neil Horman <nhorman@redhat.com> Cc: "Rafael J. Wysocki" <rjw@sisk.pl> Cc: Roland McGrath <roland@hack.frob.com> Cc: Tejun Heo <tj@kernel.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Oleg Nesterov [Wed, 20 Mar 2013 04:08:29 +0000 (15:08 +1100)]
coredump: only SIGKILL should interrupt the coredumping task
There are 2 well known and ancient problems with coredump/signals, and a
lot of related bug reports:
- do_coredump() clears TIF_SIGPENDING but of course this can't help
if, say, SIGCHLD comes after that.
In this case the coredump can fail unexpectedly.  See for example the
wait_for_dump_helpers()->signal_pending() check, but there are other
reasons too.
- At the same time, dumping a huge core on the slow media can take a
lot of time/resources and there is no way to kill the coredumping
task reliably. In particular this is not oom_kill-friendly.
This patch tries to fix the 1st problem, and makes the preparation for the
next changes.
We add the new SIGNAL_GROUP_COREDUMP flag set by zap_threads() to indicate
that this process dumps the core. prepare_signal() checks this flag and
nacks any signal except SIGKILL.
Note that this check tries to be conservative, in the long term we should
probably treat the SIGNAL_GROUP_EXIT case equally but this needs more
discussion. See marc.info/?l=linux-kernel&m=120508897917439
Notes:
- recalc_sigpending() doesn't check SIGNAL_GROUP_COREDUMP.
The patch assumes that dump_write/etc paths should never
call it, but we can change it as well.
- There is another source of TIF_SIGPENDING, freezer. This
will be addressed separately.
Signed-off-by: Oleg Nesterov <oleg@redhat.com> Tested-by: Mandeep Singh Baines <msb@chromium.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Neil Horman <nhorman@redhat.com> Cc: "Rafael J. Wysocki" <rjw@sisk.pl> Cc: Roland McGrath <roland@hack.frob.com> Cc: Tejun Heo <tj@kernel.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Lucas De Marchi [Wed, 20 Mar 2013 04:08:29 +0000 (15:08 +1100)]
kmod: remove call_usermodehelper_fns()
This function suffers from the caller not being able to determine whether
the cleanup was called if it returns -ENOMEM.  Nobody is using it anymore,
so let's remove it.
Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi> Cc: Oleg Nesterov <oleg@redhat.com> Cc: David Howells <dhowells@redhat.com> Cc: James Morris <james.l.morris@oracle.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Tejun Heo <tj@kernel.org> Cc: "Rafael J. Wysocki" <rjw@sisk.pl> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Lucas De Marchi [Wed, 20 Mar 2013 04:08:29 +0000 (15:08 +1100)]
usermodehelper: split remaining calls to call_usermodehelper_fns()
These are the only users of call_usermodehelper_fns().  This function
suffers from the caller not being able to determine whether the cleanup
was called.  Even though in these places the cleanup pointer is NULL,
convert them to use the separate call_usermodehelper_setup() +
call_usermodehelper_exec() functions, so we can remove the _fns variant.
Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi> Cc: Oleg Nesterov <oleg@redhat.com> Cc: David Howells <dhowells@redhat.com> Cc: James Morris <james.l.morris@oracle.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Tejun Heo <tj@kernel.org> Cc: "Rafael J. Wysocki" <rjw@sisk.pl> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>