Trond Myklebust [Tue, 3 Jan 2006 08:55:38 +0000 (09:55 +0100)]
NFSv4: Ensure DELEGRETURN returns attributes
Upon return of a write delegation, the server will almost always bump the
change attribute. Ensure that we pick up that change so that we don't
invalidate our data cache unnecessarily.
Trond Myklebust [Tue, 3 Jan 2006 08:55:37 +0000 (09:55 +0100)]
NFSv4: Ensure change attribute returned by GETATTR callback conforms to spec
According to RFC3530 we're supposed to cache the change attribute
at the time the client receives a write delegation.
If the inode is clean, a CB_GETATTR callback by the server to the
client is supposed to return the cached change attribute.
If, OTOH, the inode is dirty, the client should bump the cached
change attribute by 1.
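In sketch form, the CB_GETATTR rule reduces to the following (illustrative C, not the actual Linux client symbols):

    typedef unsigned long long u64;    /* stand-in for the kernel type */

    struct delegation_sketch {
            u64 change_attr;    /* cached when the write delegation was granted */
    };

    /* Change attribute to report in the CB_GETATTR reply. */
    static u64 cb_getattr_change_sketch(const struct delegation_sketch *deleg,
                                        int inode_is_dirty)
    {
            if (!inode_is_dirty)
                    return deleg->change_attr;     /* clean: cached value as-is */
            return deleg->change_attr + 1;         /* dirty: cached value plus 1 */
    }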
Trond Myklebust [Tue, 3 Jan 2006 08:55:34 +0000 (09:55 +0100)]
NFS: Make stat() return updated mtimes after a write()
The SuS states that a call to write() will cause mtime to be updated on
the file. In order to satisfy that requirement, we need to flush out
any cached writes in nfs_getattr().
Speed things up slightly by not committing the writes.
Chuck Lever [Wed, 30 Nov 2005 23:09:02 +0000 (18:09 -0500)]
NFS: support large reads and writes on the wire
Most NFS server implementations allow up to 64KB reads and writes on the
wire. The Solaris NFS server allows up to a megabyte, for instance.
Now the Linux NFS client supports transfer sizes up to 1MB, too. This will
help reduce protocol and context switch overhead on read/write intensive NFS
workloads, and support larger atomic read and write operations on servers
that support them.
Test-plan:
Connectathon and iozone on mount point with wsize=rsize>32768 over TCP.
Tests with NFS over UDP to verify the maximum RPC payload size cap.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Chuck Lever [Wed, 30 Nov 2005 23:08:17 +0000 (18:08 -0500)]
NFS: use generic_write_checks() to sanity check direct writes
Replace ad hoc write parameter sanity checking in nfs_file_direct_write()
with a call to generic_write_checks(). This should make the proper checks
modulo the O_LARGEFILE flag, and should catch NFSv2-specific limitations by
virtue of i_sb->s_maxbytes.
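The resulting call pattern looks roughly like this (a paraphrase using the 2.6-era generic_write_checks() signature; the surrounding function body is illustrative, not the verbatim nfs_file_direct_write()):

    /* Paraphrased sketch, not the actual function body. */
    static ssize_t nfs_direct_write_sketch(struct file *file,
                                           const char __user *buf,
                                           size_t count, loff_t pos)
    {
            ssize_t retval;

            /* Replaces the ad hoc checks: honours O_LARGEFILE and clamps
             * count against i_sb->s_maxbytes, which is what catches the
             * NFSv2 limitations. */
            retval = generic_write_checks(file, &pos, &count, 0);
            if (retval)
                    return retval;
            if (count == 0)
                    return 0;

            /* ... carry on with the direct write of 'count' bytes at 'pos' ... */
            return count;
    }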
Test plan:
Posix compliance testing with both NFSv2 and NFSv3.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Trond Myklebust [Tue, 3 Jan 2006 08:55:13 +0000 (09:55 +0100)]
NFSv4: Make nfs4_state track O_RDWR, O_RDONLY and O_WRONLY separately
A closer reading of RFC3530 reveals that OPEN_DOWNGRADE must always
specify access modes that have been the argument of a previous OPEN
operation.
IOW: doing OPEN(O_RDWR) and then OPEN_DOWNGRADE(O_WRONLY) is forbidden
unless the user also called OPEN(O_WRONLY).
In order to fix that, we really need to track the three possible open
states separately.
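In sketch form, the separate accounting looks like this (field and function names are illustrative of the idea, not necessarily the final nfs4_state layout):

    /* Sketch: count completed OPENs per access mode. */
    struct nfs4_open_counts_sketch {
            unsigned int n_rdonly;    /* OPEN(O_RDONLY) */
            unsigned int n_wronly;    /* OPEN(O_WRONLY) */
            unsigned int n_rdwr;      /* OPEN(O_RDWR) */
    };

    /* OPEN_DOWNGRADE to O_WRONLY is legal only if O_WRONLY itself was
     * the argument of an earlier OPEN. */
    static int may_downgrade_to_wronly(const struct nfs4_open_counts_sketch *c)
    {
            return c->n_wronly > 0;
    }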
Trond Myklebust [Tue, 3 Jan 2006 08:55:11 +0000 (09:55 +0100)]
NFSv4: Convert open() into an asynchronous RPC call
OPEN is a stateful operation, so we must ensure that it always
completes. In order to allow users to interrupt the operation,
we need to make the RPC call asynchronous, and then wait on
completion (or cancel).
Trond Myklebust [Tue, 3 Jan 2006 08:55:06 +0000 (09:55 +0100)]
NFSv4: stateful NFSv4 RPC call interface
The NFSv4 model requires us to complete all RPC calls that might
establish state on the server whether or not the user wants to
interrupt it. We may also need to schedule new work (including
new RPC calls) in order to cancel the new state.
The asynchronous RPC model will allow us to ensure that RPC calls
always complete, but in order to allow for "synchronous" RPC, we
want to add the ability to wait for completion.
The waits are, of course, interruptible.
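A hedged sketch of the resulting call pattern; the completion-based plumbing below is illustrative, not the actual RPC client symbols:

    #include <linux/completion.h>
    #include <linux/errno.h>

    struct stateful_call_sketch {
            struct completion done;    /* signalled by the task's final callback */
            int result;                /* RPC status, filled in at completion */
    };

    /* "Synchronous" wrapper: the async RPC has already been launched and is
     * guaranteed to run to completion regardless of what happens here. */
    static int wait_for_stateful_call(struct stateful_call_sketch *call)
    {
            if (wait_for_completion_interruptible(&call->done) != 0)
                    /* Interrupted: the call still completes in the background,
                     * and its completion path can undo any new server state. */
                    return -ERESTARTSYS;
            return call->result;
    }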
Andrew Morton [Wed, 16 Nov 2005 23:07:01 +0000 (15:07 -0800)]
identify multipage ->writepages() calls
NFS needs to be able to distinguish between single-page ->writepage() calls and
multipage ->writepages() calls.
For the single-page writepage calls NFS can kick off the I/O within the
context of ->writepage().
For multipage ->writepages calls, nfs_writepage() will leave the I/O pending
and nfs_writepages() will kick off the I/O when it all has been queued up
within NFS.
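The distinction can be carried in the writeback_control that do_writepages() passes down; a sketch of how nfs_writepage() might consume it (wbc->for_writepages is the flag this patch is about, while the two NFS helpers are hypothetical stand-ins):

    /* Sketch: do_writepages() sets wbc->for_writepages, so ->writepage()
     * can tell whether it is part of a multipage flush. The two NFS
     * helpers are hypothetical stand-ins for the request machinery. */
    static int nfs_writepage_sketch(struct page *page,
                                    struct writeback_control *wbc)
    {
            int err = nfs_queue_page_write_sketch(page);  /* always just queue */

            if (err == 0 && !wbc->for_writepages)
                    nfs_flush_queued_sketch(page->mapping); /* lone ->writepage() */
            /* else: nfs_writepages() kicks the I/O once everything is queued */
            unlock_page(page);
            return err;
    }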
Linus Torvalds [Fri, 6 Jan 2006 16:43:16 +0000 (08:43 -0800)]
x86: remove bogus 'pci=usepirqmask' suggestion when no irq is defined
This was harmless, but for the case of a device that had no irq
pre-defined we would incorrectly suggest that "usepirqmask" might make a
difference. It never would, and the message was just confusing people.
Jens Axboe [Fri, 6 Jan 2006 08:28:07 +0000 (09:28 +0100)]
[PATCH] Suspend support for libata
This patch adds suspend support to libata, and to ata_piix in particular.
Most low-level drivers should just need to add the four hooks to work. As I
can only test ata_piix, I didn't enable it for other drivers, though.
Suspend support is the single most important feature on a notebook, and
most new notebooks have SATA drives. It's quite embarrassing that we
_still_ do not support this. Right now, it's perfectly possible to
suspend the drive in mid-transfer.
NeilBrown [Fri, 6 Jan 2006 08:20:55 +0000 (00:20 -0800)]
[PATCH] md: keep better track of dev/array size when assembling md arrays
Move the checks that device size is never less than array size into
bind_rdev_to_array to make sure they always happen properly (there is one
place where currently they don't).
Also reject any superblock which claims an array size larger than the device
in question can hold.
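Paraphrased, the two checks amount to something like this (2.6-era md typedefs; the claimed-size parameter is a stand-in, assumed to be in the same units as rdev->size):

    /* Sketch only, not the verbatim md code. */
    static int size_checks_sketch(mddev_t *mddev, mdk_rdev_t *rdev,
                                  unsigned long long sb_claimed_size)
    {
            /* A member device must be able to hold the whole array. */
            if (mddev->size && rdev->size < mddev->size)
                    return -ENOSPC;
            /* A superblock claiming more space than the device has is bogus. */
            if (sb_claimed_size > rdev->size)
                    return -EINVAL;
            return 0;
    }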
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Adrian Bunk [Fri, 6 Jan 2006 08:20:44 +0000 (00:20 -0800)]
[PATCH] drivers/md/md.c: make md_new_event() static
Make the needlessly global function md_new_event() static.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Cc: Neil Brown <neilb@cse.unsw.edu.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:39 +0000 (00:20 -0800)]
[PATCH] md: remove inappropriate limits in md/bitmap configuration.
The kernel should not be imposing these policy limits: The time between
bitmap updates should certainly be allowed to be more than 15 seconds, and
if someone wants a bitmap chunk size in excess of 4MB, the kernel isn't the
place to stop them.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:37 +0000 (00:20 -0800)]
[PATCH] md: fix possible problem in raid1/raid10 error overwriting
The code to overwrite/reread for addressing read errors in raid1/raid10
currently assumes that the read will not alter the buffer which could be used
to write to the next device. This is not a safe assumption to make.
So we split the loops into an overwrite loop and a separate re-read loop, so
that the writing is complete before reading is attempted.
Cc: Paul Clements <paul.clements@steeleye.com>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:36 +0000 (00:20 -0800)]
[PATCH] md: remove personality numbering from md
md supports multiple different RAID levels, each being implemented by a
'personality' (which is often in a separate module).
These personalities have fairly artificial 'numbers'. The numbers
are used to:
1- provide an index into an array where the various personalities
are recorded
2- identify the module (via an alias) which implements a particular
personality.
Neither of these uses really justify the existence of personality numbers.
The array can be replaced by a linked list which is searched (array lookup
only happens very rarely). Module identification can be done using an alias
based on level rather than 'personality' number.
The current 'raid5' module supports two levels (4 and 5) but only one
personality. This slight awkwardness (which was handled in the mapping from
level to personality) can be better handled by allowing raid5 to register 2
personalities.
With this change in place, the core md module does not need to have an
exhaustive list of all possible personalities, so other personalities can be
added independently.
This patch also moves the check for chunksize being non-zero into the ->run
routines for the personalities that need it, rather than having it in core-md.
This has a side effect of allowing 'faulty' and 'linear' not to have a
chunk-size set.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:30 +0000 (00:20 -0800)]
[PATCH] md: make /proc/mdstat pollable
With this patch it is possible to poll /proc/mdstat to detect arrays appearing
or disappearing, to detect failures, recovery starting, recovery completing,
and devices being added and removed.
It is similar to the poll-ability of /proc/mounts, though different in that:
We always report that the file is readable (because face it, it is, even if
only for EOF).
We report POLLPRI when there is a change so that select() can detect
it as an exceptional event. Not only are these exceptional events, but
that is the mechanism that the current 'mdadm' uses to watch for events
(It also polls after a timeout).
(We also report POLLERR like /proc/mounts).
Finally, we only reset the per-file event counter when the start of the file
is read, rather than when poll() returns an event. This is more robust as it
means that an fd will continue to report activity to poll/select until the
program clearly responds to that activity.
md_new_event takes an 'mddev' which isn't currently used, but it will be soon.
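For illustration, a minimal self-contained watcher that relies on the new behaviour: wait for POLLPRI, then re-read from offset 0 so the per-file event counter is reset:

    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[4096];
            ssize_t n;
            int fd = open("/proc/mdstat", O_RDONLY);

            if (fd < 0) {
                    perror("open /proc/mdstat");
                    return 1;
            }
            for (;;) {
                    struct pollfd pfd = { .fd = fd, .events = POLLPRI };

                    if (poll(&pfd, 1, -1) < 0) {
                            perror("poll");
                            return 1;
                    }
                    if (pfd.revents & (POLLPRI | POLLERR)) {
                            /* Reading from offset 0 resets the event counter. */
                            lseek(fd, 0, SEEK_SET);
                            while ((n = read(fd, buf, sizeof(buf))) > 0)
                                    fwrite(buf, 1, n, stdout);
                            puts("--- mdstat changed ---");
                    }
            }
    }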
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:29 +0000 (00:20 -0800)]
[PATCH] md: raid10 read-error handling - resync and read-only
Add in correct read-error handling for resync and read-only situations.
When read-only, we don't over-write, so we need to mark the failed drive in
the r10_bio so we don't re-try it. During resync, we always read all blocks,
so if there is a read error, we simply over-write it with the good block that
we found (assuming we found one).
Note that the recovery case still isn't handled in an interesting way. There
is nothing useful to do for the 2-copies case. If there are 3 or more copies,
then we could try reading from one of the non-missing copies, but this is a
bit complicated and very rarely would be used, so I'm leaving it for now.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:27 +0000 (00:20 -0800)]
[PATCH] md: make sure read error on last working drive of raid1 actually returns failure
We are inadvertently setting the R1BIO_Uptodate bit on read errors when we
decide not to try correcting (because there are no other working devices).
This means that the read error is reported to the client as success.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:26 +0000 (00:20 -0800)]
[PATCH] md: allow raid1 to check consistency
When performing a user-requested 'check' or 'repair', we read all readable
devices, and compare the contents. We only write to blocks which had read
errors, or blocks with content that differs from the first good device found.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:24 +0000 (00:20 -0800)]
[PATCH] md: fix up some rdev rcu locking in raid5/6
There is this "FIXME" comment with a typo in it!! that has been annoying me
for days, so I just had to remove it.
conf->disks[i].rdev should only be accessed if
- we know we hold a reference, or
- the mddev->reconfig_sem is down, or
- we have an rcu_readlock
handle_stripe was referencing rdev in three places without any of these. For
the first two, take an rcu_readlock. For the last, the same access
(the md_sync_acct call) is made a little later, after the rdev has been
claimed under an rcu_readlock, if R5_Syncio is set. So just use that access...
However, R5_Syncio isn't really needed, as the 'syncing' variable contains
the same information. So use that instead.
Issues, comment, and fix are identical in raid5 and raid6.
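The locking pattern being applied, paraphrased rather than quoted from raid5.c:

    /* Paraphrase of the pattern used for the first two accesses. */
    rcu_read_lock();
    rdev = rcu_dereference(conf->disks[i].rdev);
    if (rdev && !test_bit(Faulty, &rdev->flags))
            atomic_inc(&rdev->nr_pending);    /* pin it while we use it */
    else
            rdev = NULL;
    rcu_read_unlock();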
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:22 +0000 (00:20 -0800)]
[PATCH] md: better handling for read error in raid1 during resync
Handling of read errors during resync is separate from handling of read errors
during normal IO in raid1. A previous patch added support for read errors
during normal IO. This one adds support for read errors during resync or
recovery.
The key differences are that we don't need to freeze the array, because the
normal handling of resync means that this part of the array will be idle
except for resync, and the read/overwrite/re-read is needed in a separate
piece of code.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:19 +0000 (00:20 -0800)]
[PATCH] md: attempt to auto-correct read errors in raid1
On a read-error we suspend the array, then synchronously read the block from
the other devices until we find one where we can read it. Then we try writing the
good data back everywhere and make sure it works. If any write or subsequent
read fails, only then do we fail the device out of the array.
To be able to suspend the array, we need to also keep track of how many
requests are queued for handling by raid1d.
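Roughly, the sequence is as follows (a hedged outline; freeze_array(), unfreeze_array(), sync_read(), sync_write() and fail_device() are hypothetical stand-ins for the raid1 machinery):

    /* Hedged outline of the recovery path, not the verbatim raid1 code. */
    static int fix_read_error_sketch(int ndisks, int bad, sector_t sect)
    {
            int d, good = -1;

            freeze_array();                 /* relies on the queued-request count */
            for (d = 0; d < ndisks; d++)    /* look for a readable copy */
                    if (d != bad && sync_read(d, sect) == 0) {
                            good = d;
                            break;
                    }
            if (good < 0)
                    return -EIO;            /* nobody can read it: real failure */
            for (d = 0; d < ndisks; d++)    /* write the good data back everywhere */
                    if (d != good &&
                        (sync_write(d, sect) != 0 || sync_read(d, sect) != 0))
                            fail_device(d); /* only now do we kick a device */
            unfreeze_array();
            return 0;
    }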
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:18 +0000 (00:20 -0800)]
[PATCH] md: improve handing of read errors with raid6
This is a simple port of matching functionality across from raid5. If we get a
read error, we don't kick the drive straight away, but try to over-write with
good data first.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:17 +0000 (00:20 -0800)]
[PATCH] md: fix raid6 resync check/repair code
raid6 currently does not check the P/Q syndromes when doing a resync; it just
calculates the correct value and writes it. Doing the check can reduce writes
(often to 0) for a resync, and it is needed to properly implement the
echo check > sync_action
operation.
This patch implements the appropriate checks and tidies up some related code.
It also allows raid6 user-requested resync to bypass the intent bitmap.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:16 +0000 (00:20 -0800)]
[PATCH] md: move bitmap_create to after md array has been initialised
This is important because bitmap_create uses
mddev->resync_max_sectors
and that doesn't have a valid value until after the array
has been initialised (with pers->run()).
[It doesn't make a difference for current personalities that
support bitmaps, but will make a difference for raid10]
This has the added advantage of meaning we can move the thread->timeout
manipulation inside the bitmap.c code instead of sprinkling identical code
throughout all personalities.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:14 +0000 (00:20 -0800)]
[PATCH] md: small cleanups for raid5
Resync code:
Remove a test that isn't needed, move a 'compute_block' call to where it
makes more sense (where it then doesn't need a test), and add a couple of
BUG_ONs to confirm the change makes sense.
Printks:
A few were missing KERN_*.
Also fix a typo in a comment.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:13 +0000 (00:20 -0800)]
[PATCH] md: improve raid10 "IO Barrier" concept
raid10 needs to put up a barrier to new requests while it does resync or other
background recovery. The code for this is currently open-coded, slightly
obscured by its use of two waitqueues, and not documented.
This patch gathers all the related code into 4 functions, and includes a
comment which (hopefully) explains what is happening.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:20:12 +0000 (00:20 -0800)]
[PATCH] md: improve raid1 "IO Barrier" concept
raid1 needs to put up a barrier to new requests while it does resync or other
background recovery. The code for this is currently open-coded, slightly
obscured by its use of two waitqueues, and not documented.
This patch gathers all the related code into 4 functions, and includes a
comment which (hopefully) explains what is happening.
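The gathered interface looks roughly like this (one-line summaries paraphrase the new comment; bodies elided):

    /* Sketch of the gathered four-function barrier interface. */
    static void raise_barrier(conf_t *conf); /* resync: block new IO, wait for
                                              * pending IO to drain */
    static void lower_barrier(conf_t *conf); /* resync done: let IO resume */
    static void wait_barrier(conf_t *conf);  /* normal IO: wait out any barrier,
                                              * register as pending */
    static void allow_barrier(conf_t *conf); /* normal IO done: drop pending
                                              * count, wake barrier waiters */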
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Darrick J. Wong [Fri, 6 Jan 2006 08:20:11 +0000 (00:20 -0800)]
[PATCH] make dm-mirror not issue invalid resync requests
I've been attempting to set up a (Host)RAID mirror with dm_mirror on
2.6.14.3, and I've been having a strange little problem. The configuration
in question is a set of 9GB SCSI disks that have 17942584 sectors. I set
up the dm_mirror table as such:
If I'm not mistaken, this sets up a 9GB RAID1 mirror with 1MB stripes
across both SCSI disks. The sector count of the dm device is less than the
size of the disks, so we shouldn't fall off the end. However, I always get
the messages like this in dmesg when I set up the dm table:
attempt to access beyond end of device
sdd: rw=0, want=17958656, limit=17942584
Clearly, something is trying to read sectors past the end of the drive. I
traced it down to the __rh_recovery_prepare function in dm-raid1.c, which
gets called when we're putting the mirror set together. This function
calls the dirty region log's get_resync_work function to see if there's any
resync that needs to be done, and queues up any areas that are out of sync.
The log's get_resync_work function is actually a pointer to the
core_get_resync_work function in dm-log.c.
The core_get_resync_work function queries a bitset lc->sync_bits to find
out if there are any regions that are out of date (i.e. the bit is 0),
which is where the problem occurs. If every bit in lc->sync_bits is 1
(which is the case when we've just configured a new RAID1 with the nosync
option), the find_next_zero_bit does NOT return the size parameter
(lc->region_count in this case), it returns the size parameter rounded up
to the nearest multiple of 32! I don't know if this is intentional, but
i386 and x86_64 both exhibit this behavior.
In any case, the statement "if (*region == lc->region_count)" looks like
it's supposed to catch the case where there are no regions to resync and
return 0. Since find_next_zero_bit apparently has a habit of returning
a value that's larger than lc->region_count, the enclosed patch changes
the equality test to a greater-than test so that we don't try to resync
areas outside of the RAID1 region. Seeing as the HostRAID metadata
lives just past the end of the RAID1 data, mucking around in that area
is not a good idea.
I suppose another way to fix this would be to amend find_next_zero_bit so
that it doesn't return values larger than "size", but I don't know if
there's a reason for the current behavior.
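The fix itself is tiny; paraphrased, with the guard made inclusive (the real core_get_resync_work() carries more context around it):

    /* Paraphrased sketch of core_get_resync_work() after the fix. */
    static int get_resync_work_sketch(struct log_c *lc, region_t *region)
    {
            *region = find_next_zero_bit(lc->sync_bits, lc->region_count,
                                         lc->sync_search);
            lc->sync_search = *region + 1;

            /* Was "=="; find_next_zero_bit() may return a value rounded up
             * past region_count when every bit is set. */
            if (*region >= lc->region_count)
                    return 0;
            return 1;
    }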
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Acked-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>