git.karo-electronics.de Git - linux-beck.git/log
14 years agoblock: fix missing export of blk_types.h
Jens Axboe [Thu, 5 Aug 2010 06:34:13 +0000 (08:34 +0200)]
block: fix missing export of blk_types.h

Stephen reports:

  After merging the block tree, today's linux-next build (x86_64
  allmodconfig) failed like this:

  usr/include/linux/fs.h:11: included file 'linux/blk_types.h' is not exported

  Caused by commit 9d3dbbcd9a84518ff5e32ffe671d06a48cf84fd9 ("bio, fs:
  separate out bio_types.h and define READ/WRITE constants in terms of
  BIO_RW_* flags").

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: fix bad _bh spinlock nesting
Jens Axboe [Wed, 4 Aug 2010 11:34:31 +0000 (13:34 +0200)]
writeback: fix bad _bh spinlock nesting

Fix a bug where a lock is _bh nested within another _bh lock,
but forgets to use the _bh variant for unlock.

Furthermore, it's not necessary to nest _bh locks; the inner lock
can just use spin_lock(). So fix up the bug by making that change.
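
For illustration, a minimal sketch of the corrected pattern, using hypothetical
lock names (not the actual writeback code):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(outer_lock);     /* hypothetical locks, illustration only */
static DEFINE_SPINLOCK(inner_lock);

static void nested_bh_example(void)
{
        spin_lock_bh(&outer_lock);      /* outer lock disables bottom halves */
        spin_lock(&inner_lock);         /* plain variant suffices under a _bh lock */
        /* ... critical section ... */
        spin_unlock(&inner_lock);
        spin_unlock_bh(&outer_lock);    /* unlock must match the _bh lock */
}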

Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agodrbd: revert "delay probes", feature is being re-implemented differently
Lars Ellenberg [Tue, 3 Aug 2010 18:20:20 +0000 (20:20 +0200)]
drbd: revert "delay probes", feature is being re-implemented differently

It was a now abandoned attempt to throttle resync bandwidth
based on the delay it causes on the bulk data socket.
It has no userbase yet, and has been disabled by
9173465ccb51c09cc3102a10af93e9f469a0af6f already.
This removes the now unused code.

The basic feature, namely using up "idle" bandwidth
of network and disk IO subsystem, with minimal impact
to application IO, is being reimplemented differently.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agodrbd: Initialize all members of sync_conf to their defaults [Bugz 315]
Philipp Reisner [Tue, 29 Jun 2010 15:35:34 +0000 (17:35 +0200)]
drbd: Initialize all members of sync_conf to their defaults [Bugz 315]

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agodrbd: Disable delay probes for the upcoming release
Philipp Reisner [Mon, 19 Jul 2010 13:04:57 +0000 (15:04 +0200)]
drbd: Disable delay probes for the upcoming release

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: cleanup bdi_register
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:25 +0000 (14:29 +0300)]
writeback: cleanup bdi_register

This patch makes sure we first initialize everything and set the BDI_registered
flag, and only after this do we add the bdi to 'bdi_list'. The current code adds
the bdi to the list too early, and as a result the

WARN(!test_bit(BDI_registered, &bdi->state))

warning in the bdi forker is triggered. Also, it is in general good practice to
make things visible only when they are fully initialized.

Also, this patch does a few micro clean-ups:
1. Removes the 'exit' label which does not do anything, just returns. This
   allows us to get rid of a few braces and the 'ret' variable and makes the
   code smaller.
2. If 'kthread_run()' fails, return the error code it returns, not a hard-coded
   '-ENOMEM'. Theoretically, some day 'kthread_run()' may return something
   else. Also, in case of failure it is not necessary to set 'bdi->wb.task' to
   NULL.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: add new tracepoints
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:24 +0000 (14:29 +0300)]
writeback: add new tracepoints

Add 2 new trace points to the periodic write-back wake up case, just like we do
in the 'bdi_queue_work()' function. Namely, introduce:

1. trace_writeback_wake_thread(bdi)
2. trace_writeback_wake_forker_thread(bdi)

The first event is triggered every time we wake up a bdi thread to start
periodic background write-out. The second event is triggered only when the bdi
thread does not exist and should be created by the forker thread.

This patch was suggested by Dave Chinner and Christoph Hellwig.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: remove unnecessary init_timer call
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:23 +0000 (14:29 +0300)]
writeback: remove unnecessary init_timer call

The 'setup_timer()' function also calls 'init_timer()', so the extra
'init_timer()' call is not needed. Indeed, 'setup_timer()' is basically
'init_timer()' plus initialization of the callback function and data pointers.
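
As a sketch of the equivalence (hypothetical timer and callback names, not the
actual writeback code):

#include <linux/timer.h>

static void wakeup_fn(unsigned long data) { }   /* hypothetical callback */
static struct timer_list example_timer;

static void timer_init_example(void)
{
        /* Redundant: setup_timer() already performs the init_timer() part. */
        init_timer(&example_timer);
        setup_timer(&example_timer, wakeup_fn, 0);

        /* Sufficient on its own: */
        setup_timer(&example_timer, wakeup_fn, 0);
}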

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: optimize periodic bdi thread wakeups
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:22 +0000 (14:29 +0300)]
writeback: optimize periodic bdi thread wakeups

When the first inode for a bdi is marked dirty, we wake up the bdi thread which
should take care of the periodic background write-out. However, the write-out
will actually start only 'dirty_writeback_interval' centisecs later, so we can
delay the wake-up.

This change was requested by Nick Piggin who pointed out that if we delay the
wake-up, we weed out 2 unnecessary context switches, which matters because
'__mark_inode_dirty()' is a hot-path function.

This patch introduces a new function - 'bdi_wakeup_thread_delayed()', which
sets up a timer to wake-up the bdi thread and returns. So the wake-up is
delayed.
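
A minimal sketch of what such a helper might look like (the 'wakeup_timer'
field and the exact timeout conversion are assumptions, not necessarily the
patch's code):

static void bdi_wakeup_thread_delayed(struct backing_dev_info *bdi)
{
        unsigned long timeout;

        /* dirty_writeback_interval is in centisecs; convert to jiffies
         * and arm a timer instead of waking the thread right away. */
        timeout = msecs_to_jiffies(dirty_writeback_interval * 10);
        mod_timer(&bdi->wb.wakeup_timer, jiffies + timeout);
}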

We also delete the timer in bdi threads just before writing back, and delete it
synchronously when unregistering the bdi. At the unregister point the bdi does
not have any users, so no one can arm it again.

Since we now take 'bdi->wb_lock' in the timer, which can execute in softirq
context, we have to use 'spin_lock_bh()' for 'bdi->wb_lock'. This patch makes
this change as well.

This patch also moves the 'bdi_wb_init()' function down in the file to avoid
forward-declaration of 'bdi_wakeup_thread_delayed()'.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: prevent unnecessary bdi threads wakeups
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:21 +0000 (14:29 +0300)]
writeback: prevent unnecessary bdi threads wakeups

Finally, we can get rid of unnecessary wake-ups in bdi threads, which are very
bad for battery-driven devices.

There are two types of activities bdi threads do:
1. process bdi works from the 'bdi->work_list'
2. periodic write-back

So there are 2 sources of wake-up events for bdi threads:

1. 'bdi_queue_work()' - submits bdi works
2. '__mark_inode_dirty()' - adds dirty I/O to bdi's

The former already has bdi wake-up code. The latter does not, and this patch
adds it.

'__mark_inode_dirty()' is a hot-path function, but this patch adds another
'spin_lock(&bdi->wb_lock)' there. However, it is taken only in the rare case
when the bdi has no dirty inodes. So adding this spinlock should be fine and
should not affect performance.
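
Roughly, the added wake-up could look like this (a sketch only; the exact
condition and surrounding code in '__mark_inode_dirty()' are simplified, and
'wakeup_bdi' is an illustrative flag name):

if (wakeup_bdi) {
        /* Only reached when the bdi had no dirty inodes before,
         * so the hot path normally never takes this lock. */
        spin_lock(&bdi->wb_lock);
        if (bdi->wb.task)
                wake_up_process(bdi->wb.task);
        spin_unlock(&bdi->wb_lock);
}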

This patch makes sure bdi threads and the forker thread do not wake-up if there
is nothing to do. The forker thread will nevertheless wake up at least every
5 min. to check whether it has to kill a bdi thread. This can also be optimized,
but is not worth it.

This patch also tidies up the warning about an unregistered bdi, and turns it
from an ugly crocodile into a simple 'WARN()' statement.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: move bdi threads exiting logic to the forker thread
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:20 +0000 (14:29 +0300)]
writeback: move bdi threads exiting logic to the forker thread

Currently, bdi threads can decide to exit if there were no useful activities
for 5 minutes. However, this causes nasty races: we can easily oops in
'bdi_queue_work()' if the bdi thread decides to exit while we are waking it up.

And even if we do not oops, if the bdi thread exits immediately after we wake
it up, we'd lose the wake-up event and have an unnecessary delay (up to 5 secs)
in the bdi work processing.

This patch makes the forker thread the central place which not only creates
bdi threads, but also kills them if they were inactive long enough. This is
better design-wise.

Another reason why this change was done is to prepare for the further changes
which will prevent the bdi threads from waking up every 5 sec and wasting
power. Indeed, when the task does not wake up periodically anymore, it won't be
able to exit either.

This patch also moves the 'wake_up_bit()' call from the bdi thread to the
forker thread as well. So now the forker thread sets the BDI_pending bit, then
forks the task or kills it, then clears the bit and wakes up the waiting
process.

The only process which may wait on the bit is 'bdi_wb_shutdown()'. This
function was changed as well - now it first removes the bdi from the
'bdi_list', then waits on the 'BDI_pending' bit. Once it wakes up, it is
guaranteed that the forker thread won't race with it, because the bdi is not
visible. Note, the forker thread sets the 'BDI_pending' bit under the
'bdi->wb_lock' which is essential for proper serialization.

Additionally, when we change 'bdi->wb.task', we now take the
'bdi->work_lock', to make sure that we do not lose wake-ups that we would
otherwise lose when racing with, say, 'bdi_queue_work()'.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: restructure bdi forker loop a little
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:19 +0000 (14:29 +0300)]
writeback: restructure bdi forker loop a little

This patch re-structures the bdi forker a little:
1. Add 'bdi_cap_flush_forker(bdi)' condition check to the bdi loop. The reason
   for this is that the forker thread can start _before_ the 'BDI_registered'
   flag is set (see 'bdi_register()'), so the WARN() statement will fire for
   the default bdi. I observed this warning at boot-up.

2. Introduce an enum 'action' and use a "switch" statement in the outer loop.
   This is a preparation for the further patch which will teach the forker
   thread to kill bdi threads, so we'll have another case in the "switch"
   statement. This change was suggested by Christoph Hellwig.
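
A minimal sketch of the kind of structure this introduces (the enum values
below are illustrative assumptions, not necessarily the names in the patch):

enum {
        NO_ACTION = -1,         /* nothing to do, go back to sleep */
        FORK_THREAD,            /* a bdi needs a writeback thread */
} action = NO_ACTION;

/* ... decide on an action while walking 'bdi_list' ... */

switch (action) {
case FORK_THREAD:
        /* create the bdi writeback thread here */
        break;
case NO_ACTION:
default:
        /* nothing to do, sleep until woken or until the next check */
        break;
}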

This patch is just a small step towards the coming change where the forker
thread will kill the bdi threads. It should simplify reviewing the following
changes, which would otherwise be larger.

This patch also amends comments a little.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: move last_active to bdi
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:18 +0000 (14:29 +0300)]
writeback: move last_active to bdi

Currently bdi threads use a local variable 'last_active' which stores the last
time the bdi thread did some useful work. Move this local variable to 'struct
bdi_writeback'. This is just a preparation for the further patches which will
make the forker thread decide when bdi threads should be killed.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: do not remove bdi from bdi_list
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:17 +0000 (14:29 +0300)]
writeback: do not remove bdi from bdi_list

The forker thread removes bdis from 'bdi_list' before forking the bdi thread.
But this is wrong for at least 2 reasons.

Reason #1: if we temporarily remove a bdi from the list, we may miss works which
           would otherwise be given to us.

Reason #2: this is racy; indeed, 'bdi_wb_shutdown()' expects that bdis are
           always in the 'bdi_list' (see 'bdi_remove_from_list()'), and when
           it races with the forker thread, it can shut down the bdi thread
           at the same time as the forker creates it.

This patch makes sure the forker thread never removes bdis from 'bdi_list'
(which was suggested by Christoph Hellwig).

In order to make sure that we do not race with 'bdi_wb_shutdown()', we have to
hold the 'bdi_lock' while walking the 'bdi_list' and setting the 'BDI_pending'
flag.

NOTE! The error path is interesting. Currently, when we fail to create a bdi
thread, we move the bdi to the tail of 'bdi_list'. But if we never remove the
bdi from the list, we cannot move it to the tail either, because then we can
mess up the RCU readers which walk the list. And also, we'll have the race
described above in "Reason #2".

But I do not think that adding to the tail is important, so I just do not do
that.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: simplify bdi code a little
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:16 +0000 (14:29 +0300)]
writeback: simplify bdi code a little

This patch simplifies bdi code a little by removing the 'pending_list' which is
redundant. Indeed, currently the forker thread ('bdi_forker_thread()') is
working like this:

1. In a loop, fetch all bdi's which have works but have no writeback thread and
   move them to the 'pending_list'.
2. If the list is empty, sleep for 5 sec.
3. Otherwise, take one bdi from the list, fork the writeback thread for this
   bdi, and repeat the loop.

IOW, it first moves everything to the 'pending_list', then processes only one
element, and so on. This patch simplifies the algorithm, which is now as
follows.

1. Find the first bdi which has a work and remove it from the global list of
   bdi's (bdi_list).
2. If there was no such bdi, sleep for 5 sec.
3. Fork the writeback thread for this bdi and repeat the loop.

IOW, now we find the first bdi to process, process it, and so on. This is
simpler and involves fewer lists.

The bonus now is that we can get rid of a couple of functions, as well as
remove complications which involve 'call_rcu()' and 'bdi->rcu_head'.

This patch also makes sure we use 'list_add_tail_rcu()', instead of plain
'list_add_tail()', but this piece of code is going to be removed in the next
patch anyway.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: do not lose wake-ups in bdi threads
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:15 +0000 (14:29 +0300)]
writeback: do not lose wake-ups in bdi threads

Currently, bdi threads ('bdi_writeback_thread()') can lose wake-ups. For
example, if 'bdi_queue_work()' is executed after the bdi thread has finished
'wb_do_writeback()' but before it has called
'schedule_timeout_interruptible()'.

To fix this issue, we have to check whether we have works to process after we
have changed the task state to 'TASK_INTERRUPTIBLE'.
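
The canonical pattern, sketched (inside the bdi thread's main loop; not the
literal patch):

set_current_state(TASK_INTERRUPTIBLE);
if (!list_empty(&bdi->work_list)) {
        /* A work was queued after wb_do_writeback() returned;
         * do not sleep, go back and process it instead. */
        __set_current_state(TASK_RUNNING);
        continue;
}
schedule_timeout(msecs_to_jiffies(dirty_writeback_interval * 10));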

This patch also cleans up handling of the cases where 'dirty_writeback_interval'
is zero or non-zero.

Additionally, this patch also removes an unneeded 'list_empty_careful()' call.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: do not lose wake-ups in the forker thread - 2
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:14 +0000 (14:29 +0300)]
writeback: do not lose wake-ups in the forker thread - 2

Currently, if someone submits jobs for the default bdi, we can lose wake-up
events. E.g., this can happen if 'bdi_queue_work()' is called when
'bdi_forker_thread()' is executing code after 'wb_do_writeback(me, 0)', but
before 'set_current_state(TASK_INTERRUPTIBLE)'.

This situation is unlikely, and the result is not very severe - we'll just
delay the execution of the work, but this is still not very nice.

This patch fixes the issue by checking whether the default bdi has works before
the forker thread goes to sleep.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: do not lose wake-ups in the forker thread - 1
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:13 +0000 (14:29 +0300)]
writeback: do not lose wake-ups in the forker thread - 1

Currently the forker thread can lose wake-ups which may lead to unnecessary
delays in processing bdi works. E.g., consider the following scenario.

1. 'bdi_forker_thread()' walks the 'bdi_list', finds out there is nothing to
   do, and is about to finish the loop.
2. A bdi thread decides to exit because it was inactive for a long time.
3. 'bdi_queue_work()' adds a work to the bdi which just exited, so it wakes up
   the forker thread.
4. But 'bdi_forker_thread()' executes 'set_current_state(TASK_INTERRUPTIBLE)'
   and goes to sleep. We lose a wake-up.

Losing the wake-up is not fatal, but this means that the bdi work processing
will be delayed by up to 5 sec. This race is theoretical, I never hit it, but
it is worth fixing.

The fix is to execute 'set_current_state(TASK_INTERRUPTIBLE)' _before_ walking
'bdi_list', not after.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: fix possible race when creating bdi threads
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:12 +0000 (14:29 +0300)]
writeback: fix possible race when creating bdi threads

This patch fixes a very unlikely race condition on the bdi forker thread error
path: when bdi thread creation fails, 'bdi->wb.task' may contain the error code
for a short period of time. If at the same time someone submits a work to this
bdi, we can end up with an oops in 'bdi_queue_work()' while executing
'wake_up_process(wb->task)'.

This patch fixes the issue by introducing a temporary variable 'task' and
storing the possible error code there, so that 'wb->task' would never take
erroneous values.
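
Sketched (simplified; 'fork_bdi_thread' is an illustrative helper name, not
necessarily what the patch uses):

static int fork_bdi_thread(struct backing_dev_info *bdi)
{
        struct task_struct *task;

        task = kthread_run(bdi_writeback_thread, &bdi->wb, "flush-%s",
                           dev_name(bdi->dev));
        if (IS_ERR(task))
                return PTR_ERR(task);   /* wb.task never holds the error value */

        bdi->wb.task = task;
        return 0;
}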

Note, this race is very unlikely and I never hit it, so it is theoretical, but
nevertheless worth fixing.

This patch also merges 2 comments which were previously separate.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: harmonize writeback threads naming
Artem Bityutskiy [Sun, 25 Jul 2010 11:29:11 +0000 (14:29 +0300)]
writeback: harmonize writeback threads naming

The write-back code mixes words "thread" and "task" for the same things. This
is not a big deal, but still an inconsistency.

hch: a convention I tend to use and I've seen in various places
is to always use _task for the storage of the task_struct pointer,
and thread everywhere else.  This especially helps with having
foo_thread for the actual thread and foo_task for a global
variable keeping the task_struct pointer

This patch renames:
* 'bdi_add_default_flusher_task()' -> 'bdi_add_default_flusher_thread()'
* 'bdi_forker_task()'              -> 'bdi_forker_thread()'

because bdi threads are 'bdi_writeback_thread()', so these names are more
consistent.

This patch also amends commentaries and makes them refer to the forker and bdi
threads as "thread", not "task".

Also, while at it, make the 'bdi_add_default_flusher_thread()' declaration use
'static void' instead of 'void static' and make checkpatch.pl happy.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocoda: fixup clash with block layer REQ_* defines
Jens Axboe [Tue, 3 Aug 2010 11:22:51 +0000 (13:22 +0200)]
coda: fixup clash with block layer REQ_* defines

CODA should not be using defines of that nature in the global
namespace, so prefix them with CODA_.

Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agobio, fs: separate out bio_types.h and define READ/WRITE constants in terms of BIO_RW_...
Tejun Heo [Tue, 3 Aug 2010 11:14:58 +0000 (13:14 +0200)]
bio, fs: separate out bio_types.h and define READ/WRITE constants in terms of BIO_RW_* flags

linux/fs.h hard coded READ/WRITE constants which should match BIO_RW_*
flags.  This is fragile and caused breakage during BIO_RW_* flag
rearrangement.  The hardcoding is to avoid include dependency hell.

Create linux/bio_types.h which contains definitions for bio data
structures and flags, include it from bio.h and fs.h, and make fs.h
define all READ/WRITE related constants in terms of BIO_RW_* flags.

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agobio, fs: update RWA_MASK, READA and SWRITE to match the corresponding BIO_RW_* bits
Tejun Heo [Tue, 3 Aug 2010 11:14:33 +0000 (13:14 +0200)]
bio, fs: update RWA_MASK, READA and SWRITE to match the corresponding BIO_RW_* bits

Commit a82afdf (block: use the same failfast bits for bio and request)
moved BIO_RW_* bits around such that they match up with REQ_* bits.
Unfortunately, fs.h hard coded RW_MASK, RWA_MASK, READ, WRITE, READA
and SWRITE as 0, 1, 2 and 3, and expected them to match with BIO_RW_*
bits.  READ/WRITE didn't change but BIO_RW_AHEAD was moved to bit 4
instead of bit 1, breaking RWA_MASK, READA and SWRITE.

This patch updates RWA_MASK, READA and SWRITE such that they match the
BIO_RW_* bits again.  A follow up patch will update the definitions to
directly use BIO_RW_* bits so that this kind of breakage won't happen
again.
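
As a sketch of the idea (not necessarily the exact hunk), the read-ahead
related constants end up derived from the bio flag instead of being literal
numbers:

/* before: literal values that silently diverged from BIO_RW_AHEAD */
#define RWA_MASK        2
#define READA           2
#define SWRITE          3

/* after: defined in terms of the bio flag, so a future bit move
 * cannot break them again */
#define RWA_MASK        (1 << BIO_RW_AHEAD)
#define READA           RWA_MASK
#define SWRITE          (WRITE | READA)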

Neil also spotted a missing RWA_MASK conversion.

Stable: The offending commit a82afdf was released with v2.6.32, so
this patch should be applied to all kernels since then but it must
_NOT_ be applied to kernels earlier than that.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-and-bisected-by: Vladislav Bolkhovitin <vst@vlnb.net>
Root-caused-by: Neil Brown <neilb@suse.de>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblock: disallow FS recursion from sb_issue_discard allocation
Mike Snitzer [Tue, 3 Aug 2010 10:54:51 +0000 (12:54 +0200)]
block: disallow FS recursion from sb_issue_discard allocation

Filesystems can call sb_issue_discard on a memory reclaim path
(e.g. ext4 calls sb_issue_discard during journal commit).

Use GFP_NOFS in sb_issue_discard to avoid recursing back into the FS.

Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocpqarray: check put_user() result
Kulikov Vasiliy [Tue, 3 Aug 2010 10:52:55 +0000 (12:52 +0200)]
cpqarray: check put_user() result

put_user() may fail; if so, return -EFAULT.
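
The pattern, sketched with a hypothetical user pointer:

int __user *argp = (int __user *)arg;   /* hypothetical ioctl argument */

if (put_user(status, argp))
        return -EFAULT;                 /* the copy to user space failed */
return 0;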

Signed-off-by: Kulikov Vasiliy <segooon@gmail.com>
Acked-by: Mike Miller <mike.miller@hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: remove wb in get_next_work_item
Minchan Kim [Tue, 3 Aug 2010 10:51:16 +0000 (12:51 +0200)]
writeback: remove wb in get_next_work_item

Commit 83ba7b07 cleans up the writeback code, so we no longer use 'wb' in
get_next_work_item(). Let's remove the unnecessary argument.

CC: Christoph Hellwig <hch@lst.de>
Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agosplice: fix misuse of SPLICE_F_NONBLOCK
Miklos Szeredi [Tue, 3 Aug 2010 10:48:50 +0000 (12:48 +0200)]
splice: fix misuse of SPLICE_F_NONBLOCK

SPLICE_F_NONBLOCK is clearly documented to only affect blocking on the
pipe.  In __generic_file_splice_read(), however, it causes an EAGAIN
if the page is currently being read.

This makes it impossible to write an application that only wants
failure if the pipe is full.  For example if the same process is
handling both ends of a pipe and isn't otherwise able to determine
whether a splice to the pipe will fill it or not.

We could make the read non-blocking on O_NONBLOCK or some other splice
flag, but for now this is the simplest fix.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
CC: stable@kernel.org
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoxen/blkfront: Use QUEUE_ORDERED_DRAIN for old backends
Jeremy Fitzhardinge [Wed, 28 Jul 2010 17:49:29 +0000 (10:49 -0700)]
xen/blkfront: Use QUEUE_ORDERED_DRAIN for old backends

If there's no feature-barrier key in xenstore, then it means it's a fairly
old backend which does uncached in-order writes, which means ORDERED_DRAIN
is appropriate.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
14 years agoxen/blkfront: use tagged queuing for barriers
Jeremy Fitzhardinge [Thu, 22 Jul 2010 21:17:00 +0000 (14:17 -0700)]
xen/blkfront: use tagged queuing for barriers

When barriers are supported, then use QUEUE_ORDERED_TAG to tell the block
subsystem that it doesn't need to do anything else with the barriers.
Previously we used ORDERED_DRAIN which caused the block subsystem to
drain all pending IO before submitting the barrier, which would be
very expensive.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
14 years agoscsi: use REQ_TYPE_FS for flush request
FUJITA Tomonori [Fri, 9 Jul 2010 00:38:26 +0000 (09:38 +0900)]
scsi: use REQ_TYPE_FS for flush request

scsi-ml uses REQ_TYPE_BLOCK_PC for flush requests from file
systems. The definition of REQ_TYPE_BLOCK_PC is that we don't retry
requests even when we can (e.g. UNIT ATTENTION) and we send the
response to the callers (then the callers can decide what they want).
We need a workaround such as the commit
77a4229719e511a0d38d9c355317ae1469adeb54 to retry BLOCK_PC flush
requests. We will need a similar workaround for discard requests too
since SCSI-ml handles them as BLOCK_PC internally.

This uses REQ_TYPE_FS for flush requests from file systems instead of
REQ_TYPE_BLOCK_PC.

scsi-ml retries only REQ_TYPE_FS requests that have data to
transfer when we can retry them (e.g. UNIT_ATTENTION). However, we
also need to retry REQ_TYPE_FS requests without data because the
callers don't.

This also changes scsi_check_sense() to retry all the REQ_TYPE_FS
requests when appropriate. Thanks to scsi_noretry_cmd(),
REQ_TYPE_BLOCK_PC requests are not retried, as before.

Note that basically, this reverts the commit
77a4229719e511a0d38d9c355317ae1469adeb54 since now we use REQ_TYPE_FS
for flush requests.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblock: set up rq->rq_disk properly for flush requests
FUJITA Tomonori [Fri, 9 Jul 2010 00:38:25 +0000 (09:38 +0900)]
block: set up rq->rq_disk properly for flush requests

q->bar_rq.rq_disk is NULL. Use the rq_disk of the original request
instead.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblock: set REQ_TYPE_FS on flush requests
FUJITA Tomonori [Fri, 9 Jul 2010 00:38:24 +0000 (09:38 +0900)]
block: set REQ_TYPE_FS on flush requests

The block layer doesn't set rq->cmd_type on flush requests. By
definition, it should be REQ_TYPE_FS (the lower layers build a command
and interpret the result of it, that is, the block layer doesn't know
the details).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agofloppy: make controller const
Stephen Hemminger [Wed, 21 Jul 2010 02:09:00 +0000 (20:09 -0600)]
floppy: make controller const

The struct cont_t is just a set of virtual function pointers.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agodrivers/block: use memdup_user
Julia Lawall [Wed, 21 Jul 2010 02:08:59 +0000 (20:08 -0600)]
drivers/block: use memdup_user

Use memdup_user when user data is immediately copied into the
allocated region.  Some checkpatch cleanups in nearby code.

The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)

// <smpl>
@@
expression from,to,size,flag;
position p;
identifier l1,l2;
@@

-  to = \(kmalloc@p\|kzalloc@p\)(size,flag);
+  to = memdup_user(from,size);
   if (
-      to==NULL
+      IS_ERR(to)
                 || ...) {
   <+... when != goto l1;
-  -ENOMEM
+  PTR_ERR(to)
   ...+>
   }
-  if (copy_from_user(to, from, size) != 0) {
-    <+... when != goto l2;
-    -EFAULT
-    ...+>
-  }
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Cc: Chirag Kantharia <chirag.kantharia@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoscsi: convert discard to REQ_TYPE_FS from REQ_TYPE_BLOCK_PC
FUJITA Tomonori [Wed, 21 Jul 2010 01:29:37 +0000 (10:29 +0900)]
scsi: convert discard to REQ_TYPE_FS from REQ_TYPE_BLOCK_PC

Jens, any reason why this isn't included in your for-2.6.36 yet?

=
From: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Subject: [PATCH resend] scsi: convert discard to REQ_TYPE_FS from REQ_TYPE_BLOCK_PC

The block layer (file systems) sends discard requests as REQ_TYPE_FS
(with REQ_TYPE_FS, the lower layers set up the commands and interpret
the results). But SCSI-ml treats discard requests as
REQ_TYPE_BLOCK_PC.

scsi-ml can handle discard requests as REQ_TYPE_FS
easily. scsi_setup_discard_cmnd() sets up struct request and the bio
nicely. The only remaining issue is that discard requests can't be
completed partially, so we need to modify sd_done.

This conversion also fixes the problem that discard requests aren't
retried when possible (e.g. UNIT ATTENTION).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: cleanup interrupt_not_for_us
Stephen M. Cameron [Mon, 19 Jul 2010 18:46:54 +0000 (13:46 -0500)]
cciss: cleanup interrupt_not_for_us

cciss: cleanup interrupt_not_for_us
In the case of MSI/MSIX interrupts, we don't need to check
if the interrupt is for us, and in the case of the intx interrupt
handler, when checking if the interrupt is for us, we don't need
to check if we're using MSI/MSIX; we know we're not.

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: change printks to dev_warn, etc.
Stephen M. Cameron [Mon, 19 Jul 2010 18:46:48 +0000 (13:46 -0500)]
cciss: change printks to dev_warn, etc.

cciss: change printks to dev_warn, etc.

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: separate cmd_alloc() and cmd_special_alloc()
Stephen M. Cameron [Mon, 19 Jul 2010 18:46:43 +0000 (13:46 -0500)]
cciss: separate cmd_alloc() and cmd_special_alloc()

cciss: separate cmd_alloc() and cmd_special_alloc()
cmd_alloc() took a parameter which caused it to either allocate
from a pre-allocated pool, or allocate using pci_alloc_consistent.
This parameter is always known at compile time, so this would
be better handled by breaking the function into two functions
and differentiating the cases by function names.  Same goes
for cmd_free().

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: use consistent variable names
Stephen M. Cameron [Mon, 19 Jul 2010 18:46:38 +0000 (13:46 -0500)]
cciss: use consistent variable names

cciss: use consistent variable names
"h", for the hba structure and "c" for the command structures.
and get rid of trivial CCISS_LOCK macro.

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: forbid hard reset of 640x boards
Stephen M. Cameron [Mon, 19 Jul 2010 18:46:33 +0000 (13:46 -0500)]
cciss: forbid hard reset of 640x boards

cciss: forbid hard reset of 640x boards
The 6402/6404 are two PCI devices -- two Smart Array controllers
-- that fit into one slot.  It is possible to reset them independently,
however, they share a battery backed cache module.  One of the pair
controls the cache and the 2nd one accesses the cache through the first
one.  If you reset the one controlling the cache, the other one will
not be a happy camper.  So we just forbid resetting this conjoined
mess.

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: sanitize max commands
Stephen M. Cameron [Mon, 19 Jul 2010 18:46:28 +0000 (13:46 -0500)]
cciss: sanitize max commands

cciss: sanitize max commands
Some controllers might try to tell us they support 0 commands
in performant mode.  This is a lie told by buggy firmware.
We have to be wary of this lest we try to allocate a negative
number of command blocks, which will be treated as unsigned,
and get an out of memory condition.

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: fix hard reset code.
Stephen M. Cameron [Mon, 19 Jul 2010 18:46:22 +0000 (13:46 -0500)]
cciss: fix hard reset code.

cciss: Fix hard reset code.
Smart Array controllers newer than the P600 do not honor the
PCI power state method of resetting the controllers.  Instead,
in these cases we can get them to reset via the "doorbell" register.

This escaped notice until we began using "performant" mode because
the fact that the controllers did not reset did not normally
impede subsequent operation, and so things generally appeared to
"work".  Once the performant mode code was added, if the controller
does not reset, it remains in performant mode.  The code immediately
after the reset presumes the controller is in "simple" mode
(previously this was fine, since the controller had remained in simple mode
the whole time).
If the controller remains in performant mode any code which presumes
it is in simple mode will not work.  So the reset needs to be fixed.

Unfortunately there are some controllers which cannot be reset by
either method (e.g. the P800).  We detect these cases by noticing that
the controller seems to remain in performant mode even after a
reset has been attempted.  In those cases we ignore the controller,
as any commands outstanding on it will result in stale completions.
To sum up, we try to do a better job of resetting the controller if
"reset_devices" is set, and if it doesn't work, we ignore that
controller.

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out cciss_reset_devices()
Stephen M. Cameron [Mon, 19 Jul 2010 18:46:17 +0000 (13:46 -0500)]
cciss: factor out cciss_reset_devices()

cciss: factor out cciss_reset_devices()

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out cciss_find_cfg_addrs.
Stephen M. Cameron [Mon, 19 Jul 2010 18:46:12 +0000 (13:46 -0500)]
cciss: factor out cciss_find_cfg_addrs.

Rationale for this is that I will also need to use this code
in fixing kdump host reset code prior to having the hba structure.

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out cciss_enter_performant_mode
Stephen M. Cameron [Mon, 19 Jul 2010 18:46:07 +0000 (13:46 -0500)]
cciss: factor out cciss_enter_performant_mode

cciss: factor out cciss_enter_performant_mode

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out cciss_wait_for_mode_change_ack()
Stephen M. Cameron [Mon, 19 Jul 2010 18:46:01 +0000 (13:46 -0500)]
cciss: factor out cciss_wait_for_mode_change_ack()

cciss: factor out cciss_wait_for_mode_change_ack()

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: make cciss_put_controller_into_performant_mode as __devinit
Stephen M. Cameron [Mon, 19 Jul 2010 18:45:56 +0000 (13:45 -0500)]
cciss: make cciss_put_controller_into_performant_mode as __devinit

cciss: make cciss_put_controller_into_performant_mode as __devinit

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: cleanup some debug ifdefs
Stephen M. Cameron [Mon, 19 Jul 2010 18:45:51 +0000 (13:45 -0500)]
cciss: cleanup some debug ifdefs

cciss: cleanup some debug ifdefs

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out cciss_p600_dma_prefetch_quirk()
Stephen M. Cameron [Mon, 19 Jul 2010 18:45:46 +0000 (13:45 -0500)]
cciss: factor out cciss_p600_dma_prefetch_quirk()

cciss: factor out cciss_p600_dma_prefetch_quirk()

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out cciss_enable_scsi_prefetch()
Stephen M. Cameron [Mon, 19 Jul 2010 18:45:41 +0000 (13:45 -0500)]
cciss: factor out cciss_enable_scsi_prefetch()

cciss: factor out cciss_enable_scsi_prefetch()

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out CISS_signature_present()
Stephen M. Cameron [Mon, 19 Jul 2010 18:45:36 +0000 (13:45 -0500)]
cciss: factor out CISS_signature_present()

cciss: factor out CISS_signature_present()

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out cciss_find_board_params
Stephen M. Cameron [Mon, 19 Jul 2010 18:45:31 +0000 (13:45 -0500)]
cciss: factor out cciss_find_board_params

cciss: factor out cciss_find_board_params

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: fix leak of ioremapped memory
Stephen M. Cameron [Mon, 19 Jul 2010 18:45:26 +0000 (13:45 -0500)]
cciss: fix leak of ioremapped memory

cciss: fix leak of ioremapped memory
in cciss_pci_init error path.

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out cciss_find_cfgtables
Stephen M. Cameron [Mon, 19 Jul 2010 18:45:21 +0000 (13:45 -0500)]
cciss: factor out cciss_find_cfgtables

cciss: factor out cciss_find_cfgtables

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out cciss_wait_for_board_ready()
Stephen M. Cameron [Mon, 19 Jul 2010 18:45:15 +0000 (13:45 -0500)]
cciss: factor out cciss_wait_for_board_ready()

cciss: factor out cciss_wait_for_board_ready()

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out cciss_find_memory_BAR()
Stephen M. Cameron [Mon, 19 Jul 2010 18:45:10 +0000 (13:45 -0500)]
cciss: factor out cciss_find_memory_BAR()

cciss: factor out cciss_find_memory_BAR()

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: remove board_id parameter from cciss_interrupt_mode()
Stephen M. Cameron [Mon, 19 Jul 2010 18:45:05 +0000 (13:45 -0500)]
cciss: remove board_id parameter from cciss_interrupt_mode()

cciss: remove board_id parameter from cciss_interrupt_mode()

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out cciss_board_disabled
Stephen M. Cameron [Mon, 19 Jul 2010 18:45:00 +0000 (13:45 -0500)]
cciss: factor out cciss_board_disabled

cciss: factor out cciss_board_disabled

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: factor out cciss_lookup_board_id
Stephen M. Cameron [Mon, 19 Jul 2010 18:44:55 +0000 (13:44 -0500)]
cciss: factor out cciss_lookup_board_id

cciss: factor out cciss_lookup_board_id

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: save pdev pointer in per hba structure early to avoid passing it around so...
Stephen M. Cameron [Mon, 19 Jul 2010 18:44:50 +0000 (13:44 -0500)]
cciss: save pdev pointer in per hba structure early to avoid passing it around so much.

cciss: save pdev pointer in per hba structure early to avoid passing it around so much.

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agocciss: Set the performant mode bit in the scsi half of the driver
Stephen M. Cameron [Mon, 19 Jul 2010 18:44:45 +0000 (13:44 -0500)]
cciss: Set the performant mode bit in the scsi half of the driver

cciss: Set the performant mode bit in the scsi half of the driver
In a couple of places, the performant mode bit wasn't being set in
the scsi half of the driver, causing commands to seem to hang.  Use
enqueue_cmd_and_start_io() where appropriate.  This fixes a bug where

echo engage scsi > /proc/driver/cciss/cciss0

would hang.

Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblkfront: Klog the unclean release path
Daniel Stodden [Sat, 7 Aug 2010 16:51:21 +0000 (18:51 +0200)]
blkfront: Klog the unclean release path

Signed-off-by: Daniel Stodden <daniel.stodden@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblkfront: Remove obsolete info->users
Daniel Stodden [Fri, 30 Apr 2010 22:01:23 +0000 (22:01 +0000)]
blkfront: Remove obsolete info->users

This is just bd_openers, protected by the bd_mutex.

Signed-off-by: Daniel Stodden <daniel.stodden@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblkfront: Remove obsolete info->users
Daniel Stodden [Sat, 7 Aug 2010 16:47:26 +0000 (18:47 +0200)]
blkfront: Remove obsolete info->users

This is just bd_openers, protected by the bd_mutex.

Signed-off-by: Daniel Stodden <daniel.stodden@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblkfront: Lock blockfront_info during xbdev removal
Daniel Stodden [Fri, 30 Apr 2010 22:01:22 +0000 (22:01 +0000)]
blkfront: Lock blockfront_info during xbdev removal

Same approach as blkfront_closing:
 * Grab the bdev safely, holding the info mutex.
 * Zap xbdev safely, holding the info mutex.
 * Try bdev removal safely, holding bd_mutex.

Signed-off-by: Daniel Stodden <daniel.stodden@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
14 years agoblkfront: Fix blkfront backend switch race (bdev release)
Daniel Stodden [Sat, 7 Aug 2010 16:45:12 +0000 (18:45 +0200)]
blkfront: Fix blkfront backend switch race (bdev release)

We cannot read backend state within bdev operations, because it risks
grabbing the state change before xenbus gets to do it.

Fixed by tracking deferral with a frontend switch to Closing. State
exposure isn't strictly necessary, but the backends won't mind.

For a 'clean' deferral this seems actually a more decent protocol than
raising errors.

Signed-off-by: Daniel Stodden <daniel.stodden@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblkfront: Fix blkfront backend switch race (bdev open)
Daniel Stodden [Sat, 7 Aug 2010 16:36:53 +0000 (18:36 +0200)]
blkfront: Fix blkfront backend switch race (bdev open)

We need not mind if users grab a late handle on a closing disk. We
probably even should not. But we have to make sure it's not a dead
one already.

Let the bdev deal with a gendisk deleted under its feet. Takes the
info mutex to decide a race against backend closing.

Signed-off-by: Daniel Stodden <daniel.stodden@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblkfront: Lock blkfront_info when closing
Daniel Stodden [Fri, 30 Apr 2010 22:01:19 +0000 (22:01 +0000)]
blkfront: Lock blkfront_info when closing

The bdev .open/.release fops race against backend switches to Closing,
handled by the XenBus thread.

The original code attempted to serialize block device holders and
xenbus only via bd_mutex. This is insufficient, the info->bd pointer
may already be stale (or null) while xenbus tries to bump up the
refcount.

Protect blkfront_info with a dedicated mutex.

Signed-off-by: Daniel Stodden <daniel.stodden@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
14 years agoblkfront: Clean up vbd release
Daniel Stodden [Sat, 7 Aug 2010 16:33:17 +0000 (18:33 +0200)]
blkfront: Clean up vbd release

 * Current blkfront_closing is rather a xlvbd_release_gendisk.
   Renamed in preparation for later patches (we need the name again).

 * Removed the misleading comment -- this only applied to the backend
   switch handler, and the queue is already flushed btw.

 * Break out the xenbus call, callers know better when to switch
   frontend state.

Signed-off-by: Daniel Stodden <daniel.stodden@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblkfront: Fix gendisk leak
Daniel Stodden [Fri, 30 Apr 2010 22:01:17 +0000 (22:01 +0000)]
blkfront: Fix gendisk leak

Signed-off-by: Daniel Stodden <daniel.stodden@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
14 years agoblkfront: Fix backtrace in del_gendisk
Daniel Stodden [Fri, 30 Apr 2010 22:01:16 +0000 (22:01 +0000)]
blkfront: Fix backtrace in del_gendisk

The call to del_gendisk follows a non-refcounted gd->queue
pointer. We release the last ref in blk_cleanup_queue. Fixed by
reordering releases accordingly.
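
I.e., roughly (a sketch; the field names of blkfront's 'struct blkfront_info'
used here are assumptions):

/* delete the disk while gd->queue is still valid ... */
del_gendisk(info->gd);

/* ... and only then drop the last reference on the queue */
blk_cleanup_queue(info->rq);
info->rq = NULL;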

Signed-off-by: Daniel Stodden <daniel.stodden@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
14 years agoxenbus: Make xenbus_switch_state transactional
Daniel Stodden [Fri, 30 Apr 2010 22:01:15 +0000 (22:01 +0000)]
xenbus: Make xenbus_switch_state transactional

According to the comments, this was how it was done years ago, but
apparently it took an xbt pointer from elsewhere back then. The code was
removed because of consistency issues: cancellation won't roll back
the saved xbdev->state.

Still, unsolicited writes to the state field remain an issue,
especially if device shutdown takes thread synchronization, and subtle
races cause accidental recreation of the device node.

Fixed by reintroducing the transaction. An internal one is sufficient,
so the xbdev->state value remains consistent.
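
The usual shape of such an internal transaction, sketched (simplified and not
the literal patch):

struct xenbus_transaction xbt;
int err;

again:
err = xenbus_transaction_start(&xbt);
if (err)
        return;
err = xenbus_printf(xbt, dev->nodename, "state", "%d", state);
if (err) {
        xenbus_transaction_end(xbt, 1);         /* abort */
        return;
}
err = xenbus_transaction_end(xbt, 0);           /* commit */
if (err == -EAGAIN)
        goto again;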

This also fixes the original hack to prevent infinite recursion. Instead of
bailing out on the first attempt to switch to Closing, it now checks the call
depth.

Signed-off-by: Daniel Stodden <daniel.stodden@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
14 years agoxen/blkfront: revalidate after setting capacity
K. Y. Srinivasan [Thu, 18 Mar 2010 22:00:54 +0000 (15:00 -0700)]
xen/blkfront: revalidate after setting capacity

Signed-off-by: K. Y. Srinivasan <ksrinivasan@novell.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
14 years agoxen/blkfront: avoid compiler warning from missing cases
Jeremy Fitzhardinge [Thu, 11 Mar 2010 23:10:40 +0000 (15:10 -0800)]
xen/blkfront: avoid compiler warning from missing cases

Fix:
drivers/block/xen-blkfront.c: In function ‘blkfront_connect’:
drivers/block/xen-blkfront.c:933: warning: enumeration value ‘BLKIF_STATE_DISCONNECTED’ not handled in switch

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
14 years agoxen/front: Propagate changed size of VBDs
K. Y. Srinivasan [Thu, 11 Mar 2010 21:42:26 +0000 (13:42 -0800)]
xen/front: Propagate changed size of VBDs

Support dynamic resizing of virtual block devices. This patch supports
both file backed block devices as well as physical devices that can be
dynamically resized on the host side.

Signed-off-by: K. Y. Srinivasan <ksrinivasan@novell.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
14 years agoblkfront: don't access freed struct xenbus_device
Jan Beulich [Sat, 7 Aug 2010 16:31:12 +0000 (18:31 +0200)]
blkfront: don't access freed struct xenbus_device

Unfortunately commit "blkfront: fixes for 'xm block-detach ... --force'"
still wasn't quite right - there was a reference to freed memory left
from blkfront_closing().

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblkfront: fixes for 'xm block-detach ... --force'
Jan Beulich [Sat, 7 Aug 2010 16:28:55 +0000 (18:28 +0200)]
blkfront: fixes for 'xm block-detach ... --force'

Prevent prematurely freeing 'struct blkfront_info' instances (when the
xenbus data structures are gone, but the Linux ones are still needed).

Prevent adding a disk with the same (major, minor) [and hence the same
name and sysfs entries, which leads to oopses] when the previous
instance wasn't fully de-allocated yet.

This still doesn't address all issues resulting from forced detach:
I/O submitted after the detach still blocks forever, likely preventing
subsequent un-mounting from completing. It's not clear to me (not
knowing much about the block layer) how this can be avoided.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoxen: use less generic names in blkfront driver.
Ian Campbell [Fri, 4 Dec 2009 15:33:54 +0000 (15:33 +0000)]
xen: use less generic names in blkfront driver.

All Xen frontend drivers have a couple of identically named functions which
makes figuring out which device went wrong from a stacktrace harder than it
needs to be. Rename them to something specific to the device type.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
14 years agowriteback.h: needs linux/device.h
Randy Dunlap [Mon, 19 Jul 2010 23:49:17 +0000 (16:49 -0700)]
writeback.h: needs linux/device.h

include/trace/events/writeback.h uses dev_name(), so it needs to
include linux/device.h.

include/trace/events/writeback.h:12: error: implicit declaration of function 'dev_name'

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblock: fix problem with sending down discard that isn't of correct granularity
Jens Axboe [Thu, 15 Jul 2010 16:49:31 +0000 (10:49 -0600)]
block: fix problem with sending down discard that isn't of correct granularity

If the queue doesn't have a limit set, or it is just set to UINT_MAX like
we default to, we could be sending down a discard request that isn't
of the correct granularity if the block size is > 512b.

Fix this by adjusting max_discard_sectors down to the proper
alignment.
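
A sketch of the adjustment (identifier names are assumptions; the in-tree code
may differ):

unsigned int granularity_sectors = q->limits.discard_granularity >> 9;

if (granularity_sectors)
        /* round the request size down to a whole number of
         * discard granules */
        max_discard_sectors -= max_discard_sectors % granularity_sectors;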

Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblkdev: check for valid request queue before issuing flush
Dave Chinner [Tue, 13 Jul 2010 07:50:50 +0000 (17:50 +1000)]
blkdev: check for valid request queue before issuing flush

Issuing a blkdev_issue_flush() on an unconfigured loop device causes a panic as
q->make_request_fn is not configured. This can occur when trying to mount the
unconfigured loop device as an XFS filesystem. There are no guards that catch
the bio before the request function is called because we don't add a payload to
the bio. Instead, manually check this case as soon as we have a pointer to the
queue to flush.
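
The guard, sketched:

struct request_queue *q = bdev_get_queue(bdev);

if (!q || !q->make_request_fn)
        /* unconfigured device (e.g. a loop device without a backing
         * file): there is nothing to flush, and submitting a bio
         * here would crash */
        return -ENXIO;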

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblock: fix for block tracing build error
Stephen Rothwell [Fri, 9 Jul 2010 04:24:38 +0000 (14:24 +1000)]
block: fix for block tracing build error

block/compat_ioctl.c: In function 'compat_blkdev_ioctl':
block/compat_ioctl.c:754: error: 'BLKTRACESETUP32' undeclared (first use in this function)

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoscsi/i2o: restore ioctl changes
Arnd Bergmann [Thu, 8 Jul 2010 12:57:03 +0000 (14:57 +0200)]
scsi/i2o: restore ioctl changes

This restores the changes from "scsi/i2o_block: cleanup ioctl
handling", which accidentally got reverted.

Original changelog:
      This fixes the ioctl function of the i2o_block driver, which
      has multiple problems:

      * The BLKI2OSRSTRAT and BLKI2OSWSTRAT commands always return
        -ENOTTY on success, where they should return 0.
      * Support for 32 bit compat is missing
      * The driver should use the .ioctl function because
        .locked_ioctl is going away.

      The use of the big kernel lock remains for now, but gets
      made explicit in the ioctl function.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoscsi/sd: remove big kernel lock
Arnd Bergmann [Wed, 7 Jul 2010 14:51:29 +0000 (16:51 +0200)]
scsi/sd: remove big kernel lock

Every user of the BKL in the sd driver is the
result of the pushdown from the block layer
into the open/close/ioctl functions.

The only place that used to rely on the BKL is
the sdkp->openers variable, which gets converted
into an atomic_t.

Nothing else seems to rely on the BKL, since the
functions do not touch global data without holding
another lock, and the open/close functions are
still protected from concurrent execution using
the bdev->bd_mutex.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: linux-scsi@vger.kernel.org
Cc: "James E.J. Bottomley" <James.Bottomley@suse.de>
Acked-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblock: remove BKL from partition ioctls
Arnd Bergmann [Wed, 7 Jul 2010 14:51:28 +0000 (16:51 +0200)]
block: remove BKL from partition ioctls

The blkpg_ioctl and blkdev_reread_part access fields of
the bdev and gendisk structures, yet they always do so
under the protection of bdev->bd_mutex, which seems
sufficient.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblock: remove BKL from BLKROSET and BLKFLSBUF
Arnd Bergmann [Wed, 7 Jul 2010 14:51:27 +0000 (16:51 +0200)]
block: remove BKL from BLKROSET and BLKFLSBUF

We only call the functions set_device_ro(),
invalidate_bdev(), sync_filesystem() and sync_blockdev()
while holding the BKL in these commands. All
of these are also done in other code paths without
the BKL, which leads me to the conclusion that
the BKL is not needed here either.

The reason we hold it here is that it was originally
pushed down into the ioctl function from vfs_ioctl.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblock: push BKL into blktrace ioctls
Arnd Bergmann [Wed, 7 Jul 2010 14:51:26 +0000 (16:51 +0200)]
block: push BKL into blktrace ioctls

The blktrace driver currently needs the BKL, but
we should not need to take that in the block layer,
so just push it down into the driver itself.

It is quite likely that the BKL is not actually
required in blktrace code and could be removed
in a follow-on patch.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblock: push down BKL into .open and .release
Arnd Bergmann [Sat, 7 Aug 2010 16:25:34 +0000 (18:25 +0200)]
block: push down BKL into .open and .release

The open and release block_device_operations are currently
called with the BKL held. In order to change that, we must
first make sure that all drivers that currently rely
on this have no regressions.

This blindly pushes the BKL into all .open and .release
operations for all block drivers to prepare for the
next step. The drivers can subsequently replace the BKL
with their own locks or remove it completely when it can
be shown that it is not needed.
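
As an illustration, a driver's .open ends up wrapped roughly like this
(hypothetical 'foo' driver; 'foo_do_open' is an illustrative helper):

#include <linux/smp_lock.h>

static int foo_open(struct block_device *bdev, fmode_t mode)
{
        int ret;

        lock_kernel();          /* BKL pushed down from the block layer */
        ret = foo_do_open(bdev, mode);
        unlock_kernel();

        return ret;
}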

The functions blkdev_get and blkdev_put are the only
remaining users of the big kernel lock in the block
layer, besides a few uses in the ioctl code, none
of which need to serialize with blkdev_{get,put}.

Most of these two functions is also under the protection
of bdev->bd_mutex, including the actual calls to
->open and ->release, and the common code does not
access any global data structures that need the BKL.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblock: push down BKL into .locked_ioctl
Arnd Bergmann [Thu, 8 Jul 2010 08:18:46 +0000 (10:18 +0200)]
block: push down BKL into .locked_ioctl

As a preparation for the removal of the big kernel
lock in the block layer, this removes the BKL
from the common ioctl handling code, moving it
into every single driver still using it.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoscsi/i2o_block: cleanup ioctl handling
Arnd Bergmann [Wed, 7 Jul 2010 14:51:23 +0000 (16:51 +0200)]
scsi/i2o_block: cleanup ioctl handling

This fixes the ioctl function of the i2o_block driver, which
has multiple problems:

* The BLKI2OSRSTRAT and BLKI2OSWSTRAT commands always return
  -ENOTTY on success, where they should return 0.
* Support for 32 bit compat is missing
* The driver should use the .ioctl function because
  .locked_ioctl is going away.

The use of the big kernel lock remains for now, but gets
made explicit in the ioctl function.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoscsi: fix discard page leak
FUJITA Tomonori [Thu, 8 Jul 2010 08:16:17 +0000 (10:16 +0200)]
scsi: fix discard page leak

We leak a page allocated for discard on some error conditions
(e.g. scsi_prep_state_check returns BLKPREP_DEFER in
scsi_setup_blk_pc_cmnd).

We unprep on requests that weren't prepped in the error path of
scsi_init_io. It makes the error path for cleaning up scsi commands messy.

Let's strictly apply the rule that we can't unprep on a request that
wasn't prepped.

Calling just scsi_put_command() in the error path of scsi_init_io() is
enough. We don't set REQ_DONTPREP yet.

scsi_setup_discard_cmnd can safely free a page on the error case with
the above rule.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: Add tracing to write_cache_pages
Dave Chinner [Wed, 7 Jul 2010 03:24:08 +0000 (13:24 +1000)]
writeback: Add tracing to write_cache_pages

Add a trace event to the ->writepage loop in write_cache_pages to give
visibility into how the ->writepage call is changing variables within the
writeback control structure. Of most interest is how wbc->nr_to_write changes
from call to call, especially with filesystems that write multiple pages
in ->writepage.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: Add tracing to balance_dirty_pages
Dave Chinner [Wed, 7 Jul 2010 03:24:07 +0000 (13:24 +1000)]
writeback: Add tracing to balance_dirty_pages

Tracing high level background writeback events is good, but it doesn't
give the entire picture. Add visibility into write throttling to catch IO
dispatched by foreground throttling of processes dirtying lots of pages.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agowriteback: Initial tracing support
Dave Chinner [Wed, 7 Jul 2010 03:24:06 +0000 (13:24 +1000)]
writeback: Initial tracing support

Trace queue/sched/exec parts of the writeback loop. This provides
insight into when and why flusher threads are scheduled to run. E.g.,
a sync invocation leaves traces like:

     sync-[...]: writeback_queue: bdi 8:0: sb_dev 8:1 nr_pages=7712 sync_mode=0 kupdate=0 range_cyclic=0 background=0
flush-8:0-[...]: writeback_exec: bdi 8:0: sb_dev 8:1 nr_pages=7712 sync_mode=0 kupdate=0 range_cyclic=0 background=0

This also lays the foundation for adding more writeback tracing to
provide deeper insight into the whole writeback path.

The original tracing code is from Jens Axboe, though this version is
a rewrite as a result of the code being traced changing
significantly.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblock: remove unused REQ_TYPE_LINUX_BLOCK
FUJITA Tomonori [Tue, 6 Jul 2010 07:03:18 +0000 (09:03 +0200)]
block: remove unused REQ_TYPE_LINUX_BLOCK

Nobody uses REQ_TYPE_LINUX_BLOCK (and its REQ_LB_OP_*).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoscsi: need to reset unprep_rq_fn in sd_remove
FUJITA Tomonori [Sat, 3 Jul 2010 14:07:04 +0000 (08:07 -0600)]
scsi: need to reset unprep_rq_fn in sd_remove

This is for block's for-2.6.36.

We need to reset q->unprep_rq_fn in sd_remove. Otherwise we hit a kernel
oops if we access a scsi disk device via sg after removing the scsi
disk module.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoblock: remove q->prepare_flush_fn completely
FUJITA Tomonori [Sat, 3 Jul 2010 08:45:40 +0000 (17:45 +0900)]
block: remove q->prepare_flush_fn completely

This removes q->prepare_flush_fn completely (changes the
blk_queue_ordered API).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agoide: stop using q->prepare_flush_fn
FUJITA Tomonori [Sat, 3 Jul 2010 08:45:39 +0000 (17:45 +0900)]
ide: stop using q->prepare_flush_fn

use REQ_FLUSH flag instead.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agovirtio_blk: stop using q->prepare_flush_fn
FUJITA Tomonori [Sat, 3 Jul 2010 08:45:38 +0000 (17:45 +0900)]
virtio_blk: stop using q->prepare_flush_fn

use REQ_FLUSH flag instead.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
14 years agodm: stop using q->prepare_flush_fn
FUJITA Tomonori [Sat, 3 Jul 2010 08:45:37 +0000 (17:45 +0900)]
dm: stop using q->prepare_flush_fn

use REQ_FLUSH flag instead.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Alasdair G Kergon <agk@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>