This fixes a klibc build error:
In file included from /klibc/usr/klibc/../include/sys/md.h:30:0,
from /klibc/usr/kinit/do_mounts_md.c:19:
/linux-next/usr/include/linux/raid/md_p.h:414:51: error: 'u32' undeclared here (not in a function)
(PPL_HEADER_SIZE - PPL_HDR_RESERVED - 4 * sizeof(u32) - sizeof(u64))
Reported-by: Greg Thelen <gthelen@google.com> Reported-by: Nigel Croxon <ncroxon@redhat.com> Tested-by: Greg Thelen <gthelen@google.com> Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
md-cluster: Fix a memleak in an error handling path
We know that 'bm_lockres' is NULL here, so 'lockres_free(bm_lockres)' is a
no-op. Judging by the resource handling in the error paths a few lines below,
it is likely that 'bitmap_free(bitmap)' was intended instead.
md: support disabling of create-on-open semantics.
md allows a new array device to be created by simply
opening a device file. This makes it difficult to
remove the device, as udev is likely to open the device file
as part of processing the REMOVE event.
There is an alternate mechanism for creating arrays
by writing to the new_array module parameter.
When using tools that work with this parameter, it is
best to disable the old semantics.
This new module parameter allows that.
Signed-off-by: NeilBrown <neilb@suse.com> Acked-by: Coly Li <colyli@suse.de> Signed-off-by: Shaohua Li <shli@fb.com>
md: allow creation of mdNNN arrays via md_mod/parameters/new_array
The intention when creating the "new_array" parameter and the
possibility of having array names like "md_HOME" was to transition
away from the old way of creating arrays and to eventually only use
this new way.
The "old" way of creating an array is to create a device node in /dev
and then open it. The act of opening creates the array.
This is problematic because sometimes the device node can be opened
when we don't want to create an array. This can easily happen
when some rule triggered by udev looks at a device as it is being
destroyed. The node in /dev continues to exist for a short period
after an array is stopped, and opening it during this time recreates
the array (as an inactive array).
Unfortunately no clear plan for the transition was created. It is now
time to fix that.
This patch allows devices with numeric names, like "md999" to be
created by writing to "new_array". This will only work if the minor
number given is not already in use. This will allow mdadm to
support the creation of arrays with numbers > 511 (currently not
possible) by writing to new_array.
mdadm can, at some point, use this approach to create *all* arrays,
which will allow the transition to using only the new way.
Signed-off-by: NeilBrown <neilb@suse.com> Acked-by: Coly Li <colyli@suse.de> Signed-off-by: Shaohua Li <shli@fb.com>
raid5-ppl: use a single mempool for ppl_io_unit and header_page
Allocate both struct ppl_io_unit and its header_page from a shared
mempool to avoid a possible deadlock. Implement allocate and free
functions for the mempool, remove the second pool for allocating
header_page. The header_pages are now freed with their io_units, not
when the ppl bio completes. Also, use GFP_NOWAIT instead of GFP_ATOMIC
for allocating ppl_io_unit because we can handle failed allocations and
there is no reason to utilize emergency reserves.
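Schematically, the shared pool works like this (a sketch of the approach; the slab cache, pool size constant, and field names are assumptions):

static void *ppl_io_pool_alloc(gfp_t gfp_mask, void *pool_data)
{
        struct kmem_cache *kc = pool_data;
        struct ppl_io_unit *io;

        io = kmem_cache_alloc(kc, gfp_mask);
        if (!io)
                return NULL;

        io->header_page = alloc_page(gfp_mask);
        if (!io->header_page) {
                kmem_cache_free(kc, io);
                return NULL;
        }
        return io;
}

static void ppl_io_pool_free(void *element, void *pool_data)
{
        struct kmem_cache *kc = pool_data;
        struct ppl_io_unit *io = element;

        __free_page(io->header_page);
        kmem_cache_free(kc, io);
}

/* one mempool now backs both the io_unit and its header_page */
ppl_conf->io_pool = mempool_create(NR_PPL_IO_UNITS, ppl_io_pool_alloc,
                                   ppl_io_pool_free, ppl_conf->io_kc);

Because an io_unit and its header_page come from a single pool, any freed element can satisfy a waiter in full, which is what removes the two-pool deadlock.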
Suggested-by: NeilBrown <neilb@suse.com> Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
linear_make_request() uses fs_bio_set, which is meant for filesystems
to use, and loops, possibly allocating from the same bio set multiple
times.
These behaviors can theoretically cause deadlocks, though as
linear requests are hardly ever split, it is unlikely in practice.
Change to use mddev->bio_set - otherwise unused for linear, and submit
the tail of a split request to generic_make_request() for it to
handle.
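The resulting flow is the usual split-and-requeue idiom (a sketch; variable names are assumptions):

if (unlikely(bio_end_sector(bio) > end_sector)) {
        struct bio *split = bio_split(bio, end_sector - bio_sector,
                                      GFP_NOIO, mddev->bio_set);
        bio_chain(split, bio);
        /* hand the remainder back; no loop in the driver */
        generic_make_request(bio);
        bio = split;
}
/* ... then submit 'split' to the underlying device as usual ... */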
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
md/raid5: make chunk_aligned_read() split bios more cleanly.
chunk_aligned_read() currently uses fs_bio_set - which is meant for
filesystems to use - and loops if multiple splits are needed, which is
not best practice.
As this is only used for READ requests, not writes, it is unlikely
to cause a problem. However it is best to be consistent in how
we split bios, and to follow the pattern used in raid1/raid10.
So create a private bioset, bio_split, and use it to perform a single
split, submitting the remainder to generic_make_request() for later
processing.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
handle_read_error() duplicates a lot of the work that raid10_read_request()
does, so it makes sense to just use that function.
handle_read_error() relies on the same r10bio being re-used so that,
in the case of a read-only array, setting IO_BLOCKED in r10bio->devs[].bio
ensures read_balance() won't re-use that device.
So when called from raid10_make_request() we clear that array, but not
when called from handle_read_error().
The warning messages that handle_read_error() prints need to be
preserved, so they are conditionally added to
raid10_read_request(). If the failing rdev can be found, messages
are printed; otherwise they aren't.
Note that as rdev_dec_pending() has already been called on the failing
rdev, we need to use rcu_read_lock() to get a new reference from
the conf. We only use this to get the name of the failing block device.
With this change, we no longer need inc_pending().
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
raid10 splits requests in two different ways for two different
reasons.
First, bio_split() is used to ensure the bio fits with a chunk.
Second, multiple r10bio structures are allocated to represent the
different sections that need to go to different devices, to avoid
known bad blocks.
This can be simplified to just use bio_split() once, and not to use
multiple r10bios.
We delay the split until we know a maximum bio size that can
be handled with a single r10bio, and then split the bio and queue
the remainder for later handling.
As with raid1, we allocate a new bio_set to help with the splitting.
It is not correct to use fs_bio_set in a device driver.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
flush_pending_writes() and raid1_unplug() each contain identical
copies of a fairly large slab of code. So factor that out into
new flush_bio_list() to simplify maintenance.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
handle_read_error() duplicates a lot of the work that raid1_read_request()
does, so it makes sense to just use that function.
This doesn't quite work as handle_read_error() relies on the same r1bio
being re-used so that, in the case of a read-only array, setting
IO_BLOCKED in r1bio->bios[] ensures read_balance() won't re-use
that device.
So we need to allow a r1bio to be passed to raid1_read_request(), and to
have that function mostly initialise the r1bio, but leave the bios[]
array untouched.
The warning messages that handle_read_error() prints need to be
preserved, so they are conditionally added to raid1_read_request().
Note that this highlights a minor bug in alloc_r1bio(): it doesn't
initialise the bios[] array, so it is possible that old content is there,
which might cause read_balance() to ignore some devices for no good reason.
With this change, we no longer need inc_pending(), or the sectors_handled
arg to alloc_r1bio().
As handle_read_error() is called from raid1d() and allocates memory,
there is a tiny chance of a deadlock. All elements of the various pools
could be queued waiting for raid1 to handle them, and there may be no
extra memory free.
Achieving guaranteed forward progress would probably require a second
thread and another mempool. Instead of that complexity, add
__GFP_HIGH to any allocations when raid1_read_request() is called
from raid1d.
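Roughly (a sketch; how "called from raid1d" is detected is simplified to a flag here):

gfp_t gfp = GFP_NOIO;

if (from_raid1d)                /* i.e. via handle_read_error() */
        gfp |= __GFP_HIGH;      /* may dip into reserves, making a
                                 * raid1d self-deadlock very unlikely */
r1_bio = mempool_alloc(conf->r1bio_pool, gfp);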
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
Now that we always pass an offset of 0 and a size
that matches the bio to alloc_behind_master_bio(),
we can remove the offset/size args and simplify the code.
We could probably remove bio_copy_data_partial() too.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
raid1 currently splits requests in two different ways for
two different reasons.
First, bio_split() is used to ensure the bio fits within a
resync accounting region.
Second, multiple r1bios are allocated for each bio to handle
the possibility of known bad blocks on some devices.
This can be simplified to just use bio_split() once, and not
use multiple r1bios.
We delay the split until we know a maximum bio size that can
be handled with a single r1bio, and then split the bio and
queue the remainder for later handling.
This avoids all loops inside raid1.c request handling. Just
a single read, or a single set of writes, is submitted to
lower-level devices for each bio that comes from
generic_make_request().
When the bio needs to be split, generic_make_request() will
do the necessary looping and call md_make_request() multiple
times.
raid1_make_request() no longer queues requests for raid1 to handle,
so we can remove that branch from the 'if'.
This patch also creates a new private bio_set
(conf->bio_split) for splitting bios. Using fs_bio_set
is wrong, as it is meant to be used by filesystems, not
block devices. Using it inside md can lead to deadlocks
under high memory pressure.
Delete unused variable in raid1_write_request() (Shaohua) Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
In the case of read-modify-write, partial parity is the same as the result
of ops_run_prexor5(), so we can just copy sh->dev[pd_idx].page into
sh->ppl_page instead of calculating it again.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
raid5-ppl: use resize_stripes() when enabling or disabling ppl
Use resize_stripes() instead of raid5_reset_stripe_cache() to allocate
or free sh->ppl_page at runtime for all stripes in the stripe cache.
raid5_reset_stripe_cache() required suspending the mddev and could
deadlock because of GFP_KERNEL allocations.
Move the 'newsize' check to check_reshape() to allow reallocating the
stripes with the same number of disks. Allocate sh->ppl_page in
alloc_stripe() instead of grow_buffers(). Pass 'struct r5conf *conf' as
a parameter to alloc_stripe() because it is needed to check whether to
allocate ppl_page. Add free_stripe() and use it to free stripes rather
than directly call kmem_cache_free(). Also free sh->ppl_page in
free_stripe().
Set MD_HAS_PPL at the end of ppl_init_log() instead of explicitly
setting it in advance and add another parameter to log_init() to allow
calling ppl_init_log() without the bit set. Don't try to calculate
partial parity or add a stripe to log if it does not have ppl_page set.
Enabling ppl can now be performed without suspending the mddev, because
the log won't be used until new stripes are allocated with ppl_page.
Calling mddev_suspend/resume is still necessary when disabling ppl,
because we want all stripes to finish before stopping the log, but
resize_stripes() can be called after mddev_resume() when ppl is no
longer active.
Suggested-by: NeilBrown <neilb@suse.com> Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
Use a single no_mem_stripes list instead of per member device lists for
handling stripes that need retrying in case of failed io_unit
allocation. Because io_units are allocated from a memory pool shared
between all member disks, the no_mem_stripes list should be checked when
an io_unit for any member is freed. This fixes a deadlock that could
happen if there are stripes in more than one no_mem_stripes list.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
md/raid1: avoid reusing a resync bio after error handling.
fix_sync_read_error() modifies a bio on a newly faulty
device by setting bi_end_io to end_sync_write.
This ensures that put_buf() will still call rdev_dec_pending()
as required, but makes sure that subsequent code in
fix_sync_read_error() doesn't try to read from the device.
Unfortunately this interacts badly with sync_request_write()
which assumes that any bio with bi_end_io set to non-NULL
other than end_sync_read is safe to write to.
As the device is now faulty it doesn't make sense to write.
As the bio was recently used for a read, it is "dirty"
and not suitable for immediate submission.
In particular, ->bi_next might be non-NULL, which will cause
generic_make_request() to complain.
Break this interaction by refusing to write to devices
which are marked as Faulty.
Reported-and-tested-by: Michael Wang <yun.wang@profitbricks.com> Fixes: 2e52d449bcec ("md/raid1: add failfast handling for reads.") Cc: stable@vger.kernel.org (v4.10+) Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
Zhilong Liu [Mon, 10 Apr 2017 06:15:55 +0000 (14:15 +0800)]
md.c:didn't unlock the mddev before return EINVAL in array_size_store
md.c: it needs to release the mddev lock before
the array_size_store() returns.
Fixes: ab5a98b132fd ("md-cluster: change array_sectors and update size are not supported") Signed-off-by: Zhilong Liu <zlliu@suse.com> Reviewed-by: Guoqing Jiang <gqjiang@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
md: MD_CLOSING needs to be cleared after called md_set_readonly or do_md_stop
If md_set_readonly() is called and it sets the MD_CLOSING bit, the mddev
cannot be opened any more because the MD_CLOSING bit is never cleared.
Thus it needs to be cleared in md_ioctl after any call to md_set_readonly()
or do_md_stop().
Signed-off-by: NeilBrown <neilb@suse.com> Fixes: af8d8e6f0315 ("md: changes for MD_STILL_CLOSED flag") Cc: stable@vger.kernel.org (v4.9+) Signed-off-by: Zhilong Liu <zlliu@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
We need to set 'first = 0' at the end of the rdev_for_each
loop so that we can get the array's min_offset_diff correctly;
otherwise min_offset_diff just reflects the last rdev's
offset diff.
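That is, schematically (a sketch; the real loop also validates each rdev):

rdev_for_each(rdev, mddev) {
        sector_t diff = rdev->new_data_offset - rdev->data_offset;

        if (first || diff < min_offset_diff)
                min_offset_diff = diff;
        first = 0;      /* must be cleared on every iteration,
                         * not just after the first rdev */
}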
md/raid6: Fix anomaly when recovering a single device in RAID6.
When recovering a single missing/failed device in a RAID6,
those stripes where the Q block is on the missing device are
handled a bit differently. In these cases it is easy to
check that the P block is correct, so we do. This results
in the P block being destroyed. Consequently the P block needs
to be read a second time in order to compute Q. This causes
lots of seeks and hurts performance.
It shouldn't be necessary to re-read P as it can be computed
from the DATA. But we only compute blocks on missing
devices, since c337869d9501 ("md: do not compute parity
unless it is on a failed drive").
So relax the change made in that commit to allow computing
of the P block in a RAID6 in which it is the only missing
block.
This makes RAID6 recovery run much faster as the disk just
"before" the recovering device is no longer seeking
back-and-forth.
Reported-and-tested-by: Brad Campbell <lists2009@fnarfbargle.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
Dennis Yang [Wed, 29 Mar 2017 07:46:13 +0000 (15:46 +0800)]
md: update slab_cache before releasing new stripes when stripes resizing
When growing a raid5 device on a machine with small memory, there is a chance
that mdadm will be killed and the following bug report can be observed. The
same bug could also be reproduced in linux-4.10.6.
The problem is that resize_stripes() releases new stripe_heads before assigning
the new slab cache to conf->slab_cache. If the shrinker function
raid5_cache_scan() gets called after resize_stripes() starts releasing new
stripes but right before the new slab cache is assigned, it is possible that
these new stripe_heads will be freed with the old slab_cache, which has already
been destroyed, and that triggers this bug.
Signed-off-by: Dennis Yang <dennisyang@qnap.com> Fixes: edbe83ab4c27 ("md/raid5: allow the stripe_cache to grow and shrink.") Cc: stable@vger.kernel.org (4.1+) Reviewed-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
Ming Lei [Tue, 28 Mar 2017 08:17:55 +0000 (16:17 +0800)]
md: raid1: kill warning on powerpc_pseries
This patch kills the warning reported on powerpc_pseries,
and actually we don't need the initialization.
After merging the md tree, today's linux-next build (powerpc
pseries_le_defconfig) produced this warning:
drivers/md/raid1.c: In function 'raid1d':
drivers/md/raid1.c:2172:9: warning: 'page_len$' may be used uninitialized in this function [-Wmaybe-uninitialized]
if (memcmp(page_address(ppages[j]),
^
drivers/md/raid1.c:2160:7: note: 'page_len$' was declared here
int page_len[RESYNC_PAGES];
^
Signed-off-by: Ming Lei <tom.leiming@gmail.com> Signed-off-by: Shaohua Li <shli@fb.com>
Song Liu [Mon, 27 Mar 2017 17:51:33 +0000 (10:51 -0700)]
md/raid5: use consistency_policy to remove journal feature
When journal device of an array fails, the array is forced into read-only
mode. To make the array normal without adding another journal device, we
need to remove journal _feature_ from the array.
This patch allows removing the journal _feature_ from an array. For this
to work, the existing journal should be either missing or faulty.
To remove journal feature, it is necessary to remove the journal device
first:
mdadm --fail /dev/md0 /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
mdadm --remove /dev/md0 /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md0
Then the journal feature can be removed by echoing into the sysfs file:
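echo resync > /sys/block/md0/md/consistency_policy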
Shaohua Li [Fri, 24 Mar 2017 22:20:47 +0000 (15:20 -0700)]
md/raid1: skip data copy for behind io for discard request
A discard request doesn't have data attached, so it's meaningless to
allocate memory and copy from the original bio for behind IO. The copy
is also bogus because bio_copy_data_partial() can't handle discard requests.
We don't support writesame/writezeros requests so far.
Reviewed-by: Ming Lei <tom.leiming@gmail.com> Signed-off-by: Shaohua Li <shli@fb.com>
Shaohua Li [Fri, 24 Mar 2017 17:34:43 +0000 (10:34 -0700)]
block: remove bio_clone_bioset_partial()
commit c18a1e0 ("block: introduce bio_clone_bioset_partial()") introduced
bio_clone_bioset_partial() for raid1 write-behind IO. Now the write-behind
code has been rewritten by Ming, so we don't need the API any more; revert the commit.
Cc: Christoph Hellwig <hch@lst.de> Reviewed-by: Jens Axboe <axboe@fb.com> Reviewed-by: Ming Lei <tom.leiming@gmail.com> Signed-off-by: Shaohua Li <shli@fb.com>
Ming Lei [Thu, 16 Mar 2017 16:12:33 +0000 (00:12 +0800)]
md: raid10: don't use bio's vec table to manage resync pages
Now we allocate one page array for managing resync pages, instead
of using the bio's vec table to do that; the old way is very hacky
and won't work any more once multipage bvec is enabled.
The introduced cost is that we need to allocate (128 + 16) * copies
bytes per r10_bio, and that is fine because the number of inflight
r10_bios for resync shouldn't be large, as pointed out by Shaohua.
Also, the bio_reset() calls in raid10_sync_request() and reshape_request()
are removed because all bios in these functions are now freshly allocated
and no longer need to be reset.
This patch can be thought of as a cleanup too.
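The bookkeeping added is essentially (a sketch; comments are ours):

/* allocated per resync bio, instead of abusing bio->bi_io_vec */
struct resync_pages {
        void            *raid_bio;              /* owning r10_bio */
        struct page     *pages[RESYNC_PAGES];   /* RESYNC_BLOCK_SIZE /
                                                 * PAGE_SIZE pages */
};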
Suggested-by: Shaohua Li <shli@kernel.org> Signed-off-by: Ming Lei <tom.leiming@gmail.com> Signed-off-by: Shaohua Li <shli@fb.com>
Ming Lei [Thu, 16 Mar 2017 16:12:31 +0000 (00:12 +0800)]
md: raid1: improve write behind
This patch improves handling of write-behind in the following ways:
- introduce a behind master bio to hold all write-behind pages
- fast-clone bios from the behind master bio
- avoid changing the bvec table directly
- use bio_copy_data() and make the code cleaner
Suggested-by: Shaohua Li <shli@fb.com> Signed-off-by: Ming Lei <tom.leiming@gmail.com> Signed-off-by: Shaohua Li <shli@fb.com>
Ming Lei [Thu, 16 Mar 2017 16:12:29 +0000 (00:12 +0800)]
block: introduce bio_copy_data_partial
It turns out we can use bio_copy_data() in raid1's write-behind,
and we can make alloc_behind_pages() cleaner and more efficient,
but we need a partial version of bio_copy_data().
Signed-off-by: Ming Lei <tom.leiming@gmail.com> Reviewed-by: Jens Axboe <axboe@fb.com> Signed-off-by: Shaohua Li <shli@fb.com>
Ming Lei [Thu, 16 Mar 2017 16:12:26 +0000 (00:12 +0800)]
md: raid1: don't use bio's vec table to manage resync pages
Now we allocate one page array for managing resync pages, instead
of using the bio's vec table to do that; the old way is very hacky
and won't work any more once multipage bvec is enabled.
The introduced cost is that we need to allocate (128 + 16) * raid_disks
bytes per r1_bio, and that is fine because the number of inflight
r1_bios for resync shouldn't be large, as pointed out by Shaohua.
Also, the bio_reset() in raid1_sync_request() is removed because
all bios are now freshly allocated and no longer need to be reset.
This patch can be thought of as a cleanup too.
Suggested-by: Shaohua Li <shli@kernel.org> Signed-off-by: Ming Lei <tom.leiming@gmail.com> Signed-off-by: Shaohua Li <shli@fb.com>
Ming Lei [Thu, 16 Mar 2017 16:12:22 +0000 (00:12 +0800)]
md: raid1/raid10: don't handle failure of bio_add_page()
Every bio_add_page() call here adds one page into a resync bio,
which is big enough to hold RESYNC_PAGES pages; the current
bio_add_page() doesn't check queue limits any more,
so it won't fail at all.
remove unused label (shaohua)
Signed-off-by: Ming Lei <tom.leiming@gmail.com> Signed-off-by: Shaohua Li <shli@fb.com>
Guoqing Jiang [Mon, 20 Mar 2017 09:46:04 +0000 (17:46 +0800)]
md/raid10: refactor some codes from raid10_write_request
Previously, we cloned both bio and repl_bio in raid10_write_request,
then added the cloned bio to plug->pending or conf->pending_bio_list
depending on whether plugging is in use, and most of the logic is the
same for the two conditions.
So introduce raid10_write_one_disk() for this, and use a 'replacement'
parameter to distinguish the two cases. No functional changes in this patch.
Signed-off-by: Guoqing Jiang <gqjiang@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
NeilBrown [Wed, 15 Mar 2017 03:05:14 +0000 (14:05 +1100)]
MD: use per-cpu counter for writes_pending
The 'writes_pending' counter is used to determine when the
array is stable so that it can be marked in the superblock
as "Clean". Consequently it needs to be updated frequently
but only checked for zero occasionally. Recent changes to
raid5 cause the count to be updated even more often - once
per 4K rather than once per bio. This provided
justification for making the updates more efficient.
So we replace the atomic counter with a percpu-refcount.
This can be incremented and decremented cheaply most of the
time, and can be switched to "atomic" mode when more
precise counting is needed. As it is possible for multiple
threads to want a precise count, we introduce a
"sync_checker" counter to count the number of threads
in "set_in_sync()", and only switch the refcount back
to percpu mode when that is zero.
We need to be careful about races between set_in_sync()
setting ->in_sync to 1, and md_write_start() setting it
to zero. md_write_start() holds the rcu_read_lock()
while checking if the refcount is in percpu mode. If
it is, then we know a switch to 'atomic' will not happen until
after we call rcu_read_unlock(), in which case set_in_sync()
will see the elevated count, and not set in_sync to 1.
If it is not in percpu mode, we take the mddev->lock to
ensure proper synchronization.
It is no longer possible to quickly check if the count is zero, which
we previously did to update a timer or to schedule the md_thread.
So now we do these every time we decrement that counter, but make
sure they are fast.
mod_timer() already optimizes the case where the timeout value doesn't
actually change. We leverage that further by always rounding off the
jiffies to the timeout value. This may delay the marking of 'clean'
slightly, but ensures we only perform atomic operations here when absolutely
needed.
md_wakeup_thread() currently always calls wake_up(), even if
THREAD_WAKEUP is already set. That too can be optimised to avoid
calls to wake_up().
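Condensed, the new set_in_sync() works like this (a sketch; error handling and sysfs notification omitted):

static bool set_in_sync(struct mddev *mddev)
{
        /* called with mddev->lock held */
        if (!mddev->in_sync) {
                mddev->sync_checkers++;
                spin_unlock(&mddev->lock);
                /* drain the percpu counts into one atomic count and wait */
                percpu_ref_switch_to_atomic_sync(&mddev->writes_pending);
                spin_lock(&mddev->lock);
                if (!mddev->in_sync &&
                    percpu_ref_is_zero(&mddev->writes_pending))
                        mddev->in_sync = 1;
                /* only drop back to fast percpu mode when no one
                 * else is in the middle of a precise check */
                if (--mddev->sync_checkers == 0)
                        percpu_ref_switch_to_percpu(&mddev->writes_pending);
        }
        return mddev->in_sync;
}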
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
NeilBrown [Wed, 15 Mar 2017 03:05:14 +0000 (14:05 +1100)]
percpu-refcount: support synchronous switch to atomic mode.
percpu_ref_switch_to_atomic_sync() schedules the switch to atomic mode, then
waits for it to complete.
Also export percpu_ref_switch_to_* so they can be used from modules.
This will be used in md/raid to count the number of pending write
requests to an array.
We occasionally need to check if the count is zero, but most often
we don't care.
We always want updates to the counter to be fast, as in some cases
we count every 4K page.
NeilBrown [Wed, 15 Mar 2017 03:05:14 +0000 (14:05 +1100)]
md: close a race with setting mddev->in_sync
If ->in_sync is being set just as md_write_start() is being called,
it is possible that set_in_sync() won't see the elevated
->writes_pending, and md_write_start() won't see the set ->in_sync.
To close this race, re-test ->writes_pending after setting ->in_sync,
and add memory barriers to ensure the increment of ->writes_pending
will be seen by the time of this second test, or the new ->in_sync
will be seen by md_write_start().
Add a spinlock to array_state_show() to ensure this temporary
instability is never visible from userspace.
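Schematically, the pairing is (a sketch, not the literal patch):

/* md_write_start() side */
percpu_ref_get(&mddev->writes_pending);
smp_mb();                       /* pairs with the barrier below */
if (mddev->in_sync) {
        /* slow path: take mddev->lock, clear ->in_sync,
         * mark the superblock dirty */
}

/* set_in_sync() side */
mddev->in_sync = 1;
smp_mb();                       /* make in_sync visible before re-test */
if (!percpu_ref_is_zero(&mddev->writes_pending))
        mddev->in_sync = 0;     /* a write raced in: back out */

Whichever side runs second is guaranteed to observe the other side's store, so "in_sync set with a write pending" cannot persist.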
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
NeilBrown [Wed, 15 Mar 2017 03:05:14 +0000 (14:05 +1100)]
md: factor out set_in_sync()
Three separate places in md.c check if the number of active
writes is zero and, if so, set mddev->in_sync.
There are a few differences, but there shouldn't be:
- it is always appropriate to notify the change in
sysfs_state, and there is no need to do this outside a
spin-locked region.
- we never need to check ->recovery_cp. The state of resync
is not relevant for whether there are any pending writes
or not (which is what ->in_sync reports).
So create set_in_sync() which does the correct tests and
makes the correct changes, and call this in all three
places.
Any behaviour changes here are minor and cosmetic.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
NeilBrown [Wed, 15 Mar 2017 03:05:14 +0000 (14:05 +1100)]
md/raid1: stop using bi_phys_segment
Change to use bio->__bi_remaining to count number of r1bio attached
to a bio.
See the previous raid10 patch for more details.
Like the raid10.c patch, this fixes a bug, as nr_queued and nr_pending
used to measure different things but were being compared.
This patch fixes another bug: nr_pending previously did not
count write-behind requests, so behind writes could continue while
a resync was happening. Now that nr_pending counts all r1_bios,
the resync cannot commence until the behind writes have completed.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
NeilBrown [Wed, 15 Mar 2017 03:05:13 +0000 (14:05 +1100)]
md/raid10: stop using bi_phys_segments
raid10 currently repurposes bi_phys_segments on each
incoming bio to count how many r10bios were used to encode the
request.
We need to know when the number of attached r10bio reaches
zero to:
1/ call bio_endio() when all IO on the bio is finished
2/ decrement ->nr_pending so that resync IO can proceed.
Now that the bio has its own __bi_remaining counter, that
can be used instead. We can call bio_inc_remaining to
increment the counter and call bio_endio() every time an
r10bio completes, rather than only when bi_phys_segments
reaches zero.
This addresses point 1, but not point 2. bio_endio()
doesn't (and cannot) report when the last r10bio has
finished, so a different approach is needed.
So: instead of counting bios in ->nr_pending, count r10bios.
i.e. every time we attach a bio, increment nr_pending.
Every time an r10bio completes, decrement nr_pending.
Normally we only increment nr_pending after first checking
that ->barrier is zero, or some other non-trivial tests and
possible waiting. When attaching multiple r10bios to a bio,
we only need the tests and the waiting once. After the
first increment, subsequent increments can happen
unconditionally as they are really all part of the one
request.
So introduce inc_pending() which can be used when we know
that nr_pending is already elevated.
Note that this fixes a bug. freeze_array() contains the line
atomic_read(&conf->nr_pending) == conf->nr_queued+extra,
which implies that the units for ->nr_pending, ->nr_queued and extra
are the same.
->nr_queued and extra count r10_bios, but prior to this patch,
->nr_pending counted bios. If a bio ever resulted in multiple
r10_bios (due to bad blocks), freeze_array() would not work correctly.
Now it does.
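The resulting pattern for each extra r10bio attached to one bio (a sketch):

static void inc_pending(struct r10conf *conf)
{
        /* caller already holds an elevated nr_pending, so no
         * barrier check or waiting is needed for this increment */
        atomic_inc(&conf->nr_pending);
}

...
bio_inc_remaining(bio); /* one more completion owed on this bio */
inc_pending(conf);      /* nr_pending now counts r10bios, not bios */

and on each r10bio's completion the code calls bio_endio(bio), which only ends the parent bio once __bi_remaining drops to zero, alongside the usual nr_pending decrement.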
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
NeilBrown [Wed, 15 Mar 2017 03:05:13 +0000 (14:05 +1100)]
md/raid1, raid10: move rXbio accounting closer to allocation.
When raid1 or raid10 find they will need to allocate a new
r1bio/r10bio, in order to work around a known bad block, they
account for the allocation well before the allocation is
made. This separation makes the correctness less obvious
and requires comments.
The accounting needs to be a little before: before the first
rXbio is submitted, but that is all.
So move the accounting down to where it makes more sense.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
NeilBrown [Wed, 15 Mar 2017 03:05:13 +0000 (14:05 +1100)]
md/raid5: remove over-loading of ->bi_phys_segments.
When a read request, which bypassed the cache, fails, we need to retry
it through the cache.
This involves attaching it to a sequence of stripe_heads, and it may not
be possible to get all the stripe_heads we need at once.
We do what we can, and record how far we got in ->bi_phys_segments so
we can pick up again later.
There is only ever one bio which may have a non-zero offset stored in
->bi_phys_segments, the one that is either active in the single thread
which calls retry_aligned_read(), or is in conf->retry_read_aligned
waiting for retry_aligned_read() to be called again.
So we only need to store one offset value. This can be in a local
variable passed between remove_bio_from_retry() and
retry_aligned_read(), or in the r5conf structure next to the
->retry_read_aligned pointer.
Storing it there allows the last usage of ->bi_phys_segments to be
removed from md/raid5.c.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
NeilBrown [Wed, 15 Mar 2017 03:05:13 +0000 (14:05 +1100)]
md/raid5: use bio_inc_remaining() instead of repurposing bi_phys_segments as a counter
md/raid5 needs to keep track of how many stripe_heads are processing a
bio so that it can delay calling bio_endio() until all stripe_heads
have completed. It currently uses 16 bits of ->bi_phys_segments for
this purpose.
16 bits is only enough for 256M requests, and it is possible for a
single bio to be larger than this, which causes problems. Also, the
bio struct contains a larger counter, __bi_remaining, which has a
purpose very similar to the purpose of our counter. So stop using
->bi_phys_segments, and instead use __bi_remaining.
This means we don't need to initialize the counter, as our caller
initializes it to '1'. It also means we can call bio_endio() directly
as it tests this counter internally.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
NeilBrown [Wed, 15 Mar 2017 03:05:12 +0000 (14:05 +1100)]
md/raid5: call bio_endio() directly rather than queueing for later.
We currently gather bios that need to be returned into a bio_list
and call bio_endio() on them all together.
The original reason for this was to avoid making the calls while
holding a spinlock.
Locking has changed a lot since then, and that reason is no longer
valid.
So discard return_io() and various return_bi lists, and just call
bio_endio() directly as needed.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
NeilBrown [Wed, 15 Mar 2017 03:05:12 +0000 (14:05 +1100)]
md/raid5: simplify delaying of writes while metadata is updated.
If a device fails during a write, we must ensure the failure is
recorded in the metadata before the completion of the write is
acknowledged.
Commit c3cce6cda162 ("md/raid5: ensure device failure recorded before
write request returns.") added code for this, but it was
unnecessarily complicated. We already had similar functionality for
handling updates to the bad-block-list, thanks to Commit de393cdea66c
("md: make it easier to wait for bad blocks to be acknowledged.")
So revert most of the former commit, and instead avoid collecting
completed writes if MD_CHANGE_PENDING is set. raid5d() will then flush
the metadata and retry the stripe_head.
As this change can leave a stripe_head ready for handling immediately
after handle_active_stripes() returns, we change raid5_do_work() to
pause when MD_CHANGE_PENDING is set, so that it doesn't spin.
We check MD_CHANGE_PENDING *after* analyse_stripe() as it could be set
asynchronously. After analyse_stripe(), we have collected stable data
about the state of devices, which will be used to make decisions.
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
NeilBrown [Wed, 15 Mar 2017 03:05:12 +0000 (14:05 +1100)]
md/raid5: use md_write_start to count stripes, not bios
We use md_write_start() to increase the count of pending writes, and
md_write_end() to decrement the count. We currently count bios
submitted to md/raid5. Change it to count stripe_heads that a WRITE bio
has been attached to.
So now, raid5_make_request() calls md_write_start() and then
md_write_end() to keep the count elevated during the setup of the
request.
add_stripe_bio() calls md_write_start() for each stripe_head, and the
completion routines always call md_write_end(), instead of only
calling it when raid5_dec_bi_active_stripes() returns 0.
make_discard_request also calls md_write_start/end().
The parallel between md_write_{start,end} and use of bi_phys_segments
can be seen in that:
Whenever we set bi_phys_segments to 1, we now call md_write_start.
Whenever we increment it on non-read requests with
raid5_inc_bi_active_stripes(), we now call md_write_inc().
Whenever we decrement bi_phys_segments on non-read requests with
raid5_dec_bi_active_stripes(), we now call md_write_end().
This reduces our dependence on keeping a per-bio count of active
stripes in bi_phys_segments.
md_write_inc() is added which parallels md_write_start(), but requires
that a write has already been started, and is certain never to sleep.
This can be used inside a spinlocked region when adding to a write
request.
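A sketch of the new helper (at this point in the series writes_pending is still a plain atomic counter; the percpu conversion comes in a later patch):

/* like md_write_start(), but a write must already be started,
 * so this can never block and is safe under a spinlock */
static inline void md_write_inc(struct mddev *mddev)
{
        WARN_ON_ONCE(mddev->in_sync);
        atomic_inc(&mddev->writes_pending);
}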
Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
Guoqing Jiang [Tue, 14 Mar 2017 01:40:20 +0000 (09:40 +0800)]
md: move bitmap_destroy to the beginning of __md_stop
Since we have switched to a synchronous way of handling the
METADATA_UPDATED msg for md-cluster, process_metadata_update
depends on mddev->thread->wqueue.
With the new change, a clustered raid could possibly hang if the
array receives a METADATA_UPDATED msg after the array has unregistered
mddev->thread, so we need to stop the clustered raid (bitmap_destroy
-> bitmap_free -> md_cluster_stop) earlier than unregistering the
thread (mddev_detach -> md_unregister_thread).
And this change should be safe for non-clustered raid since
all writes are stopped before the destroy. Also, in md_run
we activate the personality (pers->run()) before activating
the bitmap (bitmap_create()), so it is pleasingly symmetric
to stop the bitmap (bitmap_destroy()) before stopping the
personality (__md_stop() calls pers->free()); we achieve this
by moving bitmap_destroy to the beginning of __md_stop.
But, as Shaohua mentioned, we don't want to break the code that
waits for behind IO, so introduce bitmap_wait_behind_writes to
carry that code, and call the new function in both mddev_detach
and bitmap_destroy; this keeps the original behind-IO behaviour
and also fits the new condition well.
Signed-off-by: Guoqing Jiang <gqjiang@suse.com> Signed-off-by: Shaohua Li <shli@fb.com>
Song Liu [Fri, 10 Mar 2017 05:23:39 +0000 (21:23 -0800)]
md/r5cache: generate R5LOG_PAYLOAD_FLUSH
In r5c_finish_stripe_write_out(), R5LOG_PAYLOAD_FLUSH is appended to
log->current_io.
Appending R5LOG_PAYLOAD_FLUSH in quiesce would need extra writes to
the journal, so to simplify the logic we just skip R5LOG_PAYLOAD_FLUSH
in quiesce.
Although R5LOG_PAYLOAD_FLUSH supports multiple stripes per payload,
the current implementation writes one stripe per R5LOG_PAYLOAD_FLUSH,
which is simpler.
Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Shaohua Li <shli@fb.com>
Song Liu [Wed, 8 Mar 2017 01:44:21 +0000 (17:44 -0800)]
md/r5cache: handle R5LOG_PAYLOAD_FLUSH in recovery
This patch adds handling of R5LOG_PAYLOAD_FLUSH in journal recovery.
Next patch will add logic that generate R5LOG_PAYLOAD_FLUSH on flush
finish.
When R5LOG_PAYLOAD_FLUSH is seen in recovery, pending data and parity
will be dropped from recovery. This will reduce the number of stripes
to replay, and thus accelerate the recovery process.
Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Shaohua Li <shli@fb.com>
Allow writing to 'consistency_policy' attribute when the array is
active. Add a new function 'change_consistency_policy' to the
md_personality operations structure to handle the change in the
personality code. Values "ppl" and "resync" are accepted and
turn PPL on and off respectively.
When enabling PPL its location and size should first be set using
'ppl_sector' and 'ppl_size' attributes and a valid PPL header should be
written at this location on each member device.
Enabling or disabling PPL is performed under a suspended array. The
raid5_reset_stripe_cache function frees the stripe cache and allocates
it again in order to allocate or free the ppl_pages for the stripes in
the stripe cache.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
Add a function to modify the log by removing an rdev when a drive fails
or adding when a spare/replacement is activated as a raid member.
Removing a disk just clears the child log rdev pointer. No new stripes
will be accepted for this child log in ppl_write_stripe() and running io
units will be processed without writing PPL to the device.
Adding a disk sets the child log rdev pointer and writes an empty PPL
header.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
Load the log from each disk when starting the array and recover if the
array is dirty.
The initial empty PPL is written by mdadm. When loading the log we
verify the header checksum and signature. For external metadata arrays
the signature is verified in userspace, so here we read it from the
header, verifying only if it matches on all disks, and use it later when
writing PPL.
In addition to the header checksum, each header entry also contains a
checksum of its partial parity data. If the header is valid, recovery is
performed for each entry until an invalid entry is found. If the array
is not degraded and recovery using PPL fully succeeds, there is no need
to resync the array because data and parity will be consistent, so in
this case resync will be disabled.
Due to compatibility with IMSM implementations on other systems, we
can't assume that the recovery data block size is always 4K. Writes
generated by MD raid5 don't have this issue, but when recovering PPL
written in other environments it is possible to have entries with
512-byte sector granularity. The recovery code takes this into account
and also the logical sector size of the underlying drives.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
Add 'consistency_policy' attribute for array. It indicates how the array
maintains consistency in case of unexpected shutdown.
Add 'ppl_sector' and 'ppl_size' for rdev, which describe the location
and size of the PPL space on the device. They can't be changed for
active members if the array is started and PPL is enabled, so in the
setter functions only basic checks are performed. More checks are done
in ppl_validate_rdev() when starting the log.
These attributes are writable to allow enabling PPL for external
metadata arrays and (later) to enable/disable PPL for a running array.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
Implement the calculation of partial parity for a stripe and PPL write
logging functionality. The description of PPL is added to the
documentation. More details can be found in the comments in raid5-ppl.c.
Attach a page for holding the partial parity data to stripe_head.
Allocate it only if mddev has the MD_HAS_PPL flag set.
Partial parity is the xor of not modified data chunks of a stripe and is
calculated as follows:
- reconstruct-write case:
xor data from all not updated disks in a stripe
- read-modify-write case:
xor old data and parity from all updated disks in a stripe
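A toy byte-sized illustration of why the two cases agree (purely illustrative):

u8 d0 = 0xA5, d1 = 0x3C, d2 = 0x0F;   /* data chunks of one stripe */
u8 p  = d0 ^ d1 ^ d2;                 /* full parity */
u8 pp = p ^ d0;                       /* RMW updating d0: old parity
                                       * xored with old data */
/* pp == (d1 ^ d2), i.e. the xor of the chunks NOT being modified */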
Implement it using the async_tx API and integrate into raid_run_ops().
It must be called when we still have access to old data, so do it when
STRIPE_OP_BIODRAIN is set, but before ops_run_prexor5(). The result is
stored into sh->ppl_page.
Partial parity is not meaningful for full stripe write and is not stored
in the log or used for recovery, so don't attempt to calculate it when
stripe has STRIPE_FULL_WRITE.
Put the PPL metadata structures to md_p.h because userspace tools
(mdadm) will also need to read/write PPL.
Warn about using PPL with enabled disk volatile write-back cache for
now. It can be removed once disk cache flushing before writing PPL is
implemented.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
Move raid5-cache declarations from raid5.h to raid5-log.h, add inline
wrappers for functions which will be shared with ppl and use them in
raid5 core instead of direct calls to raid5-cache.
Remove unused parameter from r5c_cache_data(), move two duplicated
pr_debug() calls to r5l_init_log().
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
Include information about PPL location and size into mdp_superblock_1
and copy it to/from rdev. Because PPL is mutually exclusive with bitmap,
put it in place of 'bitmap_offset'. Add a new flag MD_FEATURE_PPL for
'feature_map', analogously to MD_FEATURE_BITMAP_OFFSET. Add MD_HAS_PPL
to mddev->flags to indicate that PPL is enabled on an array.
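Schematically, in mdp_superblock_1 (a sketch of the layout just described):

union {
        __le32  bitmap_offset;  /* sectors after superblock that
                                 * the bitmap starts (if any) */
        struct {
                __le16  offset; /* sectors from superblock to
                                 * the PPL area (signed) */
                __le16  size;   /* PPL size in sectors */
        } ppl;                  /* valid iff MD_FEATURE_PPL is set */
};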
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Signed-off-by: Shaohua Li <shli@fb.com>
Song Liu [Wed, 8 Mar 2017 00:49:17 +0000 (16:49 -0800)]
md/r5cache: improve recovery with read ahead page pool
In r5cache recovery, the journal device is scanned page by page.
Currently, we use sync_page_io() to read the journal device. This is
not efficient when we have to recover many stripes from the journal.
To improve the speed of recovery, this patch introduces a read-ahead
page pool (ra_pool) in recovery_ctx. With ra_pool, multiple consecutive
pages are read in one IO. Then the recovery code reads the journal from
ra_pool.
With ra_pool, r5l_recovery_ctx has become much bigger. Therefore,
r5l_recovery_log() is refactored so r5l_recovery_ctx is not using
stack space.
Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Shaohua Li <shli@fb.com>
Shaohua Li [Sat, 4 Mar 2017 06:06:12 +0000 (22:06 -0800)]
md/raid5: sort bios
A previous patch (raid5: only dispatch IO from raid5d for harddisk raid)
defers IO dispatching. The goal is to create a better IO pattern. At that
time, we didn't sort the deferred IO and hoped the block layer could do
IO merging and sorting. Now the raid5-cache writeback can create a large
number of bios. And if we enable multi-threading for stripe handling, we
can't control when to dispatch IO to the raid disks. Much of the time, we
are dispatching IO which the block layer can't merge effectively.
This patch goes further with the IO dispatching defer. We accumulate
bios, but we don't dispatch all the bios after a threshold is met. This
'dispatch partial portion of bios' strategy allows bios coming in a
large time window to be sent to disks together. At dispatch time,
there is a large chance the block layer can merge the bios. To make this
more effective, we dispatch IO in ascending order. This increases the
request merge chance and reduces disk seeks.
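The idea, schematically (a sketch with assumed structure names; the patch's actual bookkeeping differs in detail):

struct pending_data {
        struct list_head sibling;
        sector_t sector;                /* sort key */
        struct bio_list bios;
};

static int cmp_sector(void *priv, struct list_head *a,
                      struct list_head *b)
{
        struct pending_data *da = list_entry(a, struct pending_data,
                                             sibling);
        struct pending_data *db = list_entry(b, struct pending_data,
                                             sibling);

        if (da->sector != db->sector)
                return da->sector > db->sector ? 1 : -1;
        return 0;
}

/* at dispatch time: issue a batch in ascending sector order */
list_sort(NULL, &conf->pending_list, cmp_sector);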
Shaohua Li [Thu, 16 Feb 2017 03:37:32 +0000 (19:37 -0800)]
md/raid5: prioritize stripes for writeback
In raid5-cache writeback mode, we have two types of stripes to handle.
- stripes which aren't cached yet
- stripes which are cached and flushing out to raid disks
The upper layer is generally more sensitive to the latency of the first
type of stripes. But we have only one handle list for all these stripes,
where the two types of stripes are mixed together. When reclaim flushes
a lot of stripes, the first type of stripes can be noticeably delayed.
On the other hand, if the log space is tight, we'd like to handle the
second type of stripes faster and free log space.
This patch distinguishes the two types of stripes. They are added into
different handle lists. When we try to get a stripe to handle, we prefer
the first type of stripes unless log space is tight.
Guoqing Jiang [Wed, 1 Mar 2017 08:42:40 +0000 (16:42 +0800)]
md-cluster: add the support for resize
To update the size of a cluster raid, we need to make
sure all nodes can perform the change successfully.
However, it is possible that some of them can't do
it due to failure (bitmap_resize could fail). So
we need to consider the issue before we set the
capacity unconditionally, and we use the steps below
to perform a sanity check:
1. A changes the size, then broadcasts the METADATA_UPDATED
msg.
2. B and C receive METADATA_UPDATED and change the size,
except for calling set_capacity; sync_size is not updated
if the change failed. Also call bitmap_update_sb
to sync the sb to disk.
3. A checks the other nodes' sync_size; if sync_size has
been updated on all nodes, it sends the CHANGE_CAPACITY
msg, otherwise it sends a msg to revert the previous change.
4. B and C call set_capacity if they receive the CHANGE_CAPACITY
msg, otherwise pers->resize will be called to restore
the old value.
Guoqing Jiang [Wed, 1 Mar 2017 08:42:38 +0000 (16:42 +0800)]
md-cluster: add CHANGE_CAPACITY message type
The msg type CHANGE_CAPACITY is introduced to support
resizing a clustered raid in a later patch. It is sent
after all the nodes have the same sync_size; the receiver
node just needs to set the new capacity once it receives
this msg.
Guoqing Jiang [Wed, 1 Mar 2017 09:30:29 +0000 (17:30 +0800)]
md-cluster: use sync way to handle METADATA_UPDATED msg
Previously, when a node received the METADATA_UPDATED msg, it just
needed to wake up mddev->thread, and md_reload_sb would be called
eventually.
We took the asynchronous way to avoid a deadlock issue; the
deadlock could happen when one node is receiving the
METADATA_UPDATED msg (wants reconfig_mutex) and trying to run
the path:
md_check_recovery -> mddev_trylock(hold reconfig_mutex)
-> md_update_sb-metadata_update_start
(want EX on token however token is
got by the sending node)
Since we will support resizing for clustered raid, and we
need the metadata update handling to be synchronous so that
the initiating node can detect failure, we need to change
the way the METADATA_UPDATED msg is handled.
But we obviously need to avoid the above deadlock with the
sync way. To make this happen, we considered not holding
reconfig_mutex when calling md_reload_sb: if some other thread
has already taken reconfig_mutex and is waiting for the 'token',
then process_recvd_msg() can safely call md_reload_sb()
without taking the mutex. This is because we can be certain
that no other thread will take the mutex, and we are also certain
that the actions performed by md_reload_sb() won't interfere
with anything that the other thread is in the middle of.
To make this more concrete, we added a new cinfo->state bit,
MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
which is set in lock_token() just before dlm_lock_sync() is
called, and cleared just after. As lock_token() is always
called with reconfig_mutex held (the one exception,
resync_info_update, is distinguished in the previous
patch), if process_recvd_msg() finds that the new bit is set,
then the mutex must be held by some other thread, and it will
keep waiting.
So process_metadata_update() can call md_reload_sb() if either
mddev_trylock() succeeds, or if MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD
is set. The tricky bit is what to do if neither of these apply.
We need to wait. Fortunately mddev_unlock() always calls wake_up()
on mddev->thread->wqueue. So we can get lock_token() to call
wake_up() on that when it sets the bit.
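So the receive path ends up as, schematically (a sketch based on the description above):

static void process_metadata_update(struct mddev *mddev,
                                    struct cluster_msg *msg)
{
        struct md_cluster_info *cinfo = mddev->cluster_info;
        bool got_lock = false;

        wait_event(mddev->thread->wqueue,
                   (got_lock = mddev_trylock(mddev)) ||
                   test_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
                            &cinfo->state));
        md_reload_sb(mddev, mddev->good_device_nr);
        if (got_lock)
                mddev_unlock(mddev);
}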
There are also some related changes in this commit:
1. remove the RELOAD_SB related code since it is no longer valid.
2. mddev is added into md_cluster_info so we can get the mddev
inside lock_token.
3. add a new parameter to lock_token to distinguish whether
reconfig_mutex is held or not.
And we need to set MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD in the
following places:
1. set it before unregistering the thread, otherwise a deadlock
could appear when stopping a resyncing array.
This is because md_unregister_thread(&cinfo->recv_thread) is
blocked by recv_daemon -> process_recvd_msg
-> process_metadata_update.
To resolve the issue, MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD also
needs to be set before unregistering the thread.
2. set it in metadata_update_start to fix another deadlock.
a. Node A sends METADATA_UPDATED msg (held Token lock).
b. Node B wants to do resync, and is blocked since it can't
get Token lock, but MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD is
not set since the callchain
(md_do_sync -> sync_request
-> resync_info_update
-> sendmsg
-> lock_comm -> lock_token)
doesn't hold reconfig_mutex.
c. Node B tries to update the sb (holding reconfig_mutex), but is
stopped at the wait_event() in metadata_update_start since we have
set the MD_CLUSTER_SEND_LOCK flag in lock_comm (step 2).
d. Then Node B receives the METADATA_UPDATED msg from A, and of
course recv_daemon is blocked forever.
Since metadata_update_start always calls lock_token with reconfig_mutex,
we need to set MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD here as well, and
lock_token doesn't need to set it twice unless it is invoked from
lock_comm.
Finally, thanks to Neil for his great idea and help!
Linus Torvalds [Thu, 16 Mar 2017 19:30:43 +0000 (12:30 -0700)]
Merge tag 'xfs-4.11-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
Pull xfs fix from Darrick Wong:
"Here's a single fix for -rc3 to improve input validation on inline
directory data to prevent buffer overruns due to corrupt metadata"
* tag 'xfs-4.11-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
xfs: verify inline directory data forks
Linus Torvalds [Thu, 16 Mar 2017 18:47:28 +0000 (11:47 -0700)]
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes/cleanups from Catalin Marinas:
"In Will's absence I'm sending the arm64 fixes he queued for 4.11-rc3:
- fix arm64 kernel boot warning when DEBUG_VIRTUAL and KASAN are
enabled
- enable KEYS_COMPAT for keyctl compat support
- use cpus_have_const_cap() for system_uses_ttbr0_pan() (slight
performance improvement)
- update kerneldoc for cpu_suspend() rename
- remove the arm64-specific kprobe_exceptions_notify (weak generic
variant defined)"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: kernel: Update kerneldoc for cpu_suspend() rename
arm64: use const cap for system_uses_ttbr0_pan()
arm64: support keyctl() system call in 32-bit mode
arm64: kasan: avoid bad virt_to_pfn()
arm64: kprobes: remove kprobe_exceptions_notify
Linus Torvalds [Thu, 16 Mar 2017 18:43:48 +0000 (11:43 -0700)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/shli/md
Pull MD fixes from Shaohua Li:
- fix a parity calculation bug of raid5 cache by Song
- fix a potential deadlock issue by me
- fix two endian issues by Jason
- fix a disk limitation issue by Neil
- other small fixes and cleanup
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/shli/md:
md/raid1: fix a trivial typo in comments
md/r5cache: fix set_syndrome_sources() for data in cache
md: fix incorrect use of lexx_to_cpu in does_sb_need_changing
md: fix super_offset endianness in super_1_rdev_size_change
md/raid1/10: fix potential deadlock
md: don't impose the MD_SB_DISKS limit on arrays without metadata.
md: move funcs from pers->resize to update_size
md-cluster: remove useless memset from gather_all_resync_info
md-cluster: free md_cluster_info if node leave cluster
md: delete dead code
md/raid10: submit bio directly to replacement disk
Linus Torvalds [Wed, 15 Mar 2017 23:54:58 +0000 (16:54 -0700)]
Merge branch 'for-linus' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
"Four small fixes for this cycle:
- followup fix from Neil for a fix that went in before -rc2, ensuring
that we always see the full per-task bio_list.
- fix for blk-mq-sched from me that ensures that we retain similar
direct-to-issue behavior on running the queue.
- fix from Sagi fixing a potential NULL pointer dereference in blk-mq
on spurious CPU unplug.
- a memory leak fix in writeback from Tahsin, fixing a case where
device removal of a mounted device can leak a struct
wb_writeback_work"
* 'for-linus' of git://git.kernel.dk/linux-block:
blk-mq-sched: don't run the queue async from blk_mq_try_issue_directly()
writeback: fix memory leak in wb_queue_work()
blk-mq: Fix tagset reinit in the presence of cpu hot-unplug
blk: Ensure users for current->bio_list can see the full list.
Linus Torvalds [Wed, 15 Mar 2017 17:44:19 +0000 (10:44 -0700)]
Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
"This is a rather large set of fixes. The bulk are for lpfc correcting
a lot of issues in the new NVME driver code which just went in in the
merge window.
The others are:
- fix a hang in the vmware paravirt driver caused by incorrect
handling of the new MSI vector allocation
- long standing bug in storvsc, which recent block changes turned
from being a harmless annoyance into a hang
- yet more fallout (in mpt3sas) from the changes to device blocking
The remainder are small fixes and updates"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (34 commits)
scsi: lpfc: Add shutdown method for kexec
scsi: storvsc: Workaround for virtual DVD SCSI version
scsi: lpfc: revise version number to 11.2.0.10
scsi: lpfc: code cleanups in NVME initiator discovery
scsi: lpfc: code cleanups in NVME initiator base
scsi: lpfc: correct rdp diag portnames
scsi: lpfc: remove dead sli3 nvme code
scsi: lpfc: correct double print
scsi: lpfc: Rename LPFC_MAX_EQ_DELAY to LPFC_MAX_EQ_DELAY_EQID_CNT
scsi: lpfc: Rework lpfc Kconfig for NVME options
scsi: lpfc: add transport eh_timed_out reference
scsi: lpfc: Fix eh_deadline setting for sli3 adapters.
scsi: lpfc: add NVME exchange aborts
scsi: lpfc: Fix nvme allocation bug on failed nvme_fc_register_localport
scsi: lpfc: Fix IO submission if WQ is full
scsi: lpfc: Fix NVME CMD IU byte swapped word 1 problem
scsi: lpfc: Fix RCTL value on NVME LS request and response
scsi: lpfc: Fix crash during Hardware error recovery on SLI3 adapters
scsi: lpfc: fix missing spin_unlock on sql_list_lock
scsi: lpfc: don't dereference dma_buf->iocbq before null check
...
Linus Torvalds [Wed, 15 Mar 2017 16:33:15 +0000 (09:33 -0700)]
Merge tag 'gfs2-4.11-rc3.fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2
Pull gfs2 fix from Bob Peterson:
"This is an emergency patch for 4.11-rc3
The GFS2 developers uncovered a really nasty problem that can lead to
random corruption and kernel panic, much like the last one. Andreas
Gruenbacher wrote a simple one-line patch to fix the problem."
* tag 'gfs2-4.11-rc3.fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
gfs2: Avoid alignment hole in struct lm_lockname
Commit 88ffbf3e03 switches to using rhashtables for glocks, hashing over
the entire struct lm_lockname instead of its individual fields. On some
architectures, struct lm_lockname contains a hole of uninitialized
memory due to alignment rules, which now leads to incorrect hash values.
Get rid of that hole.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com> Signed-off-by: Bob Peterson <rpeterso@redhat.com> CC: <stable@vger.kernel.org> #v4.3+
Darrick J. Wong [Wed, 15 Mar 2017 07:24:25 +0000 (00:24 -0700)]
xfs: verify inline directory data forks
When we're reading or writing the data fork of an inline directory,
check the contents to make sure we're not overflowing buffers or eating
garbage data. xfs/348 corrupts an inline symlink into an inline
directory, triggering a buffer overflow bug.
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
---
v2: add more checks consistent with _dir2_sf_check and make the verifier
usable from anywhere.
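As a rough illustration of the verifier idea, here is a simplified sketch
with an invented entry format, not the actual xfs_dir2_sf on-disk layout:
walk the variable-length entries and reject any that would run past the
inline fork, instead of trusting the on-disk counts.
#include <stdint.h>
#include <stddef.h>
/* Invented format: each entry is a 1-byte name length, then the name. */
struct sf_entry {
	uint8_t namelen;
	/* name bytes follow */
};
static int verify_inline_dir(const uint8_t *fork, size_t fork_size,
			     unsigned int nentries)
{
	size_t off = 0;
	while (nentries--) {
		if (off + sizeof(struct sf_entry) > fork_size)
			return -1;   /* entry header overflows the fork */
		const struct sf_entry *e = (const void *)(fork + off);
		off += sizeof(struct sf_entry) + e->namelen;
		if (off > fork_size)
			return -1;   /* name bytes overflow the fork */
	}
	return 0;
}
int main(void)
{
	/* A 1-byte fork claiming one entry with a 250-byte name: rejected. */
	const uint8_t bogus[] = { 250 };
	return verify_inline_dir(bogus, sizeof(bogus), 1) == -1 ? 0 : 1;
}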
1) Ensure that mtu is at least IPV6_MIN_MTU in ipv6 VTI tunnel driver,
from Steffen Klassert.
2) Fix crashes when user tries to get_next_key on an LPM bpf map, from
Alexei Starovoitov.
3) Fix detection of VLAN filtering feature for bnx2x VF devices, from
Michal Schmidt.
4) We can get a divide by zero when TCP sockets are morphed into
listening state, fix from Eric Dumazet.
5) Fix socket refcounting bugs in skb_complete_wifi_ack() and
skb_complete_tx_timestamp(). From Eric Dumazet.
6) Use after free in dccp_feat_activate_values(), also from Eric
Dumazet.
7) Like bonding, team needs to use ETH_MAX_MTU as netdev->max_mtu, from
Jarod Wilson.
8) Fix use after free in vrf_xmit(), from David Ahern.
9) Don't do UDP Fragmentation Offload on IPComp ipsec packets, from
Alexey Kodanev.
10) Properly check napi_complete_done() return value in order to decide
whether to re-enable IRQs or not in amd-xgbe driver, from Thomas
Lendacky.
11) Fix double free of hwmon device in marvell phy driver, from Andrew
Lunn.
12) Don't crash on malformed netlink attributes in act_connmark, from
Etienne Noss.
13) Don't remove routes with a higher metric in ipv6 ECMP route replace,
from Sabrina Dubroca.
14) Don't write into a cloned SKB in ipv6 fragmentation handling, from
Florian Westphal.
15) Fix routing redirect races in dccp and tcp: basically the ICMP
handler can't modify the socket's cached route if it's locked by the
user at that moment (see the sketch after this list). From Jon Maxwell.
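A hypothetical kernel-style sketch of the deferral pattern behind item 15.
The helper names are invented; only bh_lock_sock()/bh_unlock_sock() and
sock_owned_by_user() are real kernel APIs:
#include <net/sock.h>
static void update_cached_route(struct sock *sk);     /* hypothetical */
static void mark_redirect_pending(struct sock *sk);   /* hypothetical */
/* The ICMP handler runs in softirq context; it may only touch the
 * socket's cached route when user space does not hold the socket lock,
 * otherwise the update must be deferred to the lock owner. */
static void icmp_redirect_sketch(struct sock *sk)
{
	bh_lock_sock(sk);
	if (!sock_owned_by_user(sk))
		update_cached_route(sk);
	else
		mark_redirect_pending(sk);
	bh_unlock_sock(sk);
}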
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (108 commits)
qed: Enable iSCSI Out-of-Order
qed: Correct out-of-bound access in OOO history
qed: Fix interrupt flags on Rx LL2
qed: Free previous connections when releasing iSCSI
qed: Fix mapping leak on LL2 rx flow
qed: Prevent creation of too-big u32-chains
qed: Align CIDs according to DORQ requirement
mlxsw: reg: Fix SPVMLR max record count
mlxsw: reg: Fix SPVM max record count
net: Resend IGMP memberships upon peer notification.
dccp: fix memory leak during tear-down of unsuccessful connection request
tun: fix premature POLLOUT notification on tun devices
dccp/tcp: fix routing redirect race
ucc/hdlc: fix two little issue
vxlan: fix ovs support
net: use net->count to check whether a netns is alive or not
bridge: drop netfilter fake rtable unconditionally
ipv6: avoid write to a possibly cloned skb
net: wimax/i2400m: fix NULL-deref at probe
isdn/gigaset: fix NULL-deref at probe
...
Linus Torvalds [Tue, 14 Mar 2017 22:00:43 +0000 (15:00 -0700)]
Merge branch 'for-4.11-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata
Pull libata fixes from Tejun Heo:
"Three libata fixes:
- fix for a circular reference bug in sysfs code which prevented
pata_legacy devices from being released after probe failure, which
in turn prevented devres from releasing the associated resources.
- drop spurious WARN in the command issue path which can be triggered
by a legitimate passthrough command.
- an ahci_qoriq specific fix"
* 'for-4.11-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata:
ahci: qoriq: correct the sata ecc setting error
libata: drop WARN from protocol error in ata_sff_qc_issue()
libata: transport: Remove circular dependency at free time
Linus Torvalds [Tue, 14 Mar 2017 21:52:08 +0000 (14:52 -0700)]
Merge branch 'for-4.11-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq
Pull workqueue fix from Tejun Heo:
"If a delayed work is queued with NULL @wq, workqueue code explodes
after the timer expires at which point it's difficult to tell who the
culprit was.
This actually happened and the offender was net/smc this time.
Add an explicit sanity check for it in the queueing path"
* 'for-4.11-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
workqueue: trigger WARN if queue_delayed_work() is called with NULL @wq
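The idea of the fix, as a standalone C sketch (names are illustrative, not
the kernel's implementation): validate @wq at queueing time, while the
culprit is still on the stack, rather than crashing later in the timer
handler.
#include <stdio.h>
struct workqueue;   /* opaque handle, illustrative only */
/* The fprintf stands in for the kernel's WARN_ON_ONCE(). */
static int queue_delayed_work_sketch(struct workqueue *wq,
				     void (*fn)(void), unsigned long delay)
{
	if (!wq) {
		fprintf(stderr, "WARN: delayed work queued with NULL wq\n");
		return -1;
	}
	(void)fn; (void)delay;   /* ... arm the timer, record fn and wq ... */
	return 0;
}
int main(void)
{
	queue_delayed_work_sketch(NULL, NULL, 100);   /* caught immediately */
	return 0;
}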
Linus Torvalds [Tue, 14 Mar 2017 21:48:50 +0000 (14:48 -0700)]
Merge branch 'for-4.11-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu
Pull percpu fixes from Tejun Heo:
- the allocation path was updating pcpu_nr_empty_pop_pages without the
required locking, which can lead to incorrect handling of empty chunks
(e.g. keeping too many around), which is buggy but shouldn't lead to
critical failures. Fixed by adding the locking
- a trivial patch to drop an unused param from pcpu_get_pages()
* 'for-4.11-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
percpu: remove unused chunk_alloc parameter from pcpu_get_pages()
percpu: acquire pcpu_lock when updating pcpu_nr_empty_pop_pages
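The locking fix follows a generic pattern; here is a minimal pthread-based
sketch with illustrative names, not the percpu allocator itself: a counter
that other paths read under a lock must also be written under that lock,
or readers can act on a stale value.
#include <pthread.h>
#include <stdio.h>
static pthread_mutex_t pcpu_lock = PTHREAD_MUTEX_INITIALIZER;
static int nr_empty_pop_pages;   /* stand-in for pcpu_nr_empty_pop_pages */
static void account_populated(int pages)
{
	pthread_mutex_lock(&pcpu_lock);    /* the lock the bug was missing */
	nr_empty_pop_pages -= pages;
	pthread_mutex_unlock(&pcpu_lock);
}
int main(void)
{
	account_populated(4);
	printf("empty populated pages: %d\n", nr_empty_pop_pages);
	return 0;
}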
David S. Miller [Tue, 14 Mar 2017 18:37:06 +0000 (11:37 -0700)]
Merge branch 'qed-fixes'
Yuval Mintz says:
====================
qed: Fixes series
This address several different issues in qed.
The more significant portions:
Patch #1 fixes a timeout that occurs when qedr utilizes the highest
CIDs available to it [or when future qede adapters would utilize
queues in some configurations].
Patch #4 fixes a leak of mapped addresses; When iommu is enabled,
offloaded storage protocols might eventually run out of resources
and fail to map additional buffers.
Patches #6 and #7 were missing from the initial iSCSI infrastructure
submission, and their absence would hamper qedi's stability once it
reaches out-of-order scenarios.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Mintz, Yuval [Tue, 14 Mar 2017 13:26:04 +0000 (15:26 +0200)]
qed: Enable iSCSI Out-of-Order
This was missing from the initial submission: qed fails to propagate
qedi's request to enable OOO to the firmware.
Fixes: fc831825f99e ("qed: Add support for hardware offloaded iSCSI") Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Mintz, Yuval [Tue, 14 Mar 2017 13:26:03 +0000 (15:26 +0200)]
qed: Correct out-of-bound access in OOO history
Need to set the number of entries in the database; otherwise the logic
would quickly run past the end of the array.
Fixes: 1d6cff4fca43 ("qed: Add iSCSI out of order packet handling") Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
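A standalone sketch of why the count matters; sizes and names are
invented, not the qed structures. The write cursor stays inside the array
only because num_of_elements holds the capacity; with it left at zero (the
bug), the wrap calculation is undefined and the cursor is no longer
confined to the array.
#include <stdio.h>
#define OOO_HISTORY_SIZE 64   /* invented capacity */
struct ooo_history {
	int entries[OOO_HISTORY_SIZE];
	unsigned int num_of_elements;   /* the field the fix initializes */
	unsigned int head;
};
static void history_add(struct ooo_history *h, int val)
{
	h->entries[h->head % h->num_of_elements] = val;
	h->head++;
}
int main(void)
{
	struct ooo_history h = { .num_of_elements = OOO_HISTORY_SIZE };
	for (int i = 0; i < 200; i++)
		history_add(&h, i);   /* wraps safely past the capacity */
	printf("head=%u\n", h.head);
	return 0;
}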
Ram Amrani [Tue, 14 Mar 2017 13:26:02 +0000 (15:26 +0200)]
qed: Fix interrupt flags on Rx LL2
Before iterating over the LL2 Rx ring, the ring's
spinlock is taken via spin_lock_irqsave().
The actual processing of the packet [including handling
by the protocol driver] is done without said lock,
so qed releases the spinlock and re-claims it afterwards.
The problem is that the final spin_unlock_irqrestore() at the end
of the iteration uses the original flags saved from the
initial irqsave() instead of the flags from the most recent
irqsave(). So it's possible that the interrupt state would
be incorrect at the end of the processing.
Fixes: 0a7fb11c23c0 ("qed: Add Light L2 support") CC: Ram Amrani <Ram.Amrani@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
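A kernel-style sketch of the corrected pattern; the helpers are
hypothetical stand-ins for the real Rx processing:
#include <linux/spinlock.h>
static bool have_rx_work(void);             /* hypothetical */
static void process_packet_unlocked(void);  /* hypothetical */
/* When the lock is dropped and retaken inside the loop, @flags must be
 * refreshed by the inner spin_lock_irqsave(), so the final restore uses
 * the most recent interrupt state rather than the one saved before the
 * loop -- which is exactly what the fix ensures. */
static void ll2_rxq_completion_sketch(spinlock_t *lock)
{
	unsigned long flags;
	spin_lock_irqsave(lock, flags);
	while (have_rx_work()) {
		spin_unlock_irqrestore(lock, flags);
		process_packet_unlocked();          /* runs without the lock */
		spin_lock_irqsave(lock, flags);     /* refreshes @flags */
	}
	spin_unlock_irqrestore(lock, flags);
}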
Mintz, Yuval [Tue, 14 Mar 2017 13:26:01 +0000 (15:26 +0200)]
qed: Free previous connections when releasing iSCSI
Fixes: fc831825f99e ("qed: Add support for hardware offloaded iSCSI") Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Tomer Tayar [Tue, 14 Mar 2017 13:25:59 +0000 (15:25 +0200)]
qed: Prevent creation of too-big u32-chains
Current logic would allow the creation of a chain with U32_MAX + 1
elements, while the actual maximum supported by the driver
infrastructure is U32_MAX.
Fixes: a91eb52abb50 ("qed: Revisit chain implementation") Signed-off-by: Tomer Tayar <Tomer.Tayar@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
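A standalone sketch of the bounds check implied here (names invented): if
the element count is computed or stored in 32 bits, a request for
U32_MAX + 1 elements silently wraps to 0, so do the math in 64 bits and
reject anything above UINT32_MAX.
#include <stdint.h>
#include <stdio.h>
static int chain_check_count(uint64_t requested)
{
	/* The u32-indexed infrastructure can address at most U32_MAX. */
	if (requested == 0 || requested > UINT32_MAX)
		return -1;
	return 0;
}
int main(void)
{
	printf("%d\n", chain_check_count((uint64_t)UINT32_MAX + 1));   /* -1 */
	printf("%d\n", chain_check_count(UINT32_MAX));                 /*  0 */
	return 0;
}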
Ram Amrani [Tue, 14 Mar 2017 13:25:58 +0000 (15:25 +0200)]
qed: Align CIDs according to DORQ requirement
The Doorbell HW block can be configured at a granularity
of 16 x CIDs, so we need to make sure that the actual number
of CIDs configured is a multiple of 16.
Today, when RoCE is enabled and the number is unaligned, doorbelling
the higher CIDs would fail to reach the firmware and would eventually
time out.
Fixes: dbb799c39717 ("qed: Initialize hardware for new protocols") Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
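A minimal sketch of the rounding described above; the constant name is
invented, and the kernel's ALIGN() macro performs the same round-up:
#include <stdio.h>
#define CID_ALIGN 16u   /* DORQ configures CIDs in blocks of 16 */
static unsigned int align_cids(unsigned int n)
{
	/* Round up to the next multiple of 16. */
	return (n + CID_ALIGN - 1) & ~(CID_ALIGN - 1);
}
int main(void)
{
	printf("%u -> %u\n", 1000u, align_cids(1000u));   /* 1000 -> 1008 */
	return 0;
}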