git.karo-electronics.de Git - linux-beck.git/log
11 years agoblock: partition: msdos: provide UUIDs for partitions
Stephen Warren [Fri, 9 Nov 2012 00:12:28 +0000 (16:12 -0800)]
block: partition: msdos: provide UUIDs for partitions

The MSDOS/MBR partition table includes a 32-bit unique ID, often referred
to as the NT disk signature.  When combined with a partition number within
the table, this can form a unique ID similar in concept to EFI/GPT's
partition UUID.  Constructing and recording this value in struct
partition_meta_info allows MSDOS partitions to be referred to on the
kernel command-line using the following syntax:

root=PARTUUID=0002dd75-01
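
For illustration only, a minimal user-space sketch of how such an identifier
is composed from the 32-bit NT disk signature and the 1-based partition
number (variable names here are invented for the example, not taken from
the patch):

    #include <stdio.h>

    int main(void)
    {
        unsigned int nt_disk_signature = 0x0002dd75; /* example MBR disk signature */
        int partition_number = 1;                    /* 1-based slot in the table */
        char partuuid[16];

        /* "SSSSSSSS-PP": 8 hex digits of signature, dash, 2 hex digits of slot */
        snprintf(partuuid, sizeof(partuuid), "%08x-%02x",
                 nt_disk_signature, partition_number);
        printf("root=PARTUUID=%s\n", partuuid);
        return 0;
    }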

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Will Drewry <wad@chromium.org>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
11 years agoinit: reduce PARTUUID min length to 1 from 36
Stephen Warren [Fri, 9 Nov 2012 00:12:27 +0000 (16:12 -0800)]
init: reduce PARTUUID min length to 1 from 36

Reduce the minimum length for a root=PARTUUID= parameter to be considered
valid from 36 to 1.  EFI/GPT partition UUIDs are always exactly 36
characters long, hence the previous limit.  However, the next patch will
support DOS/MBR UUIDs too, which have a different, shorter, format.
Instead of validating any particular length, just ensure that at least
some non-empty value was given by the user.

Also, consider a missing UUID value to be a parsing error, in the same
vein as if /PARTNROFF exists and can't be parsed.  As such, make both
error cases print a message and disable rootwait.  Convert to pr_err while
we're at it.
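
A rough sketch of the relaxed check described above (the names and the error
label are illustrative, not the exact code in init/do_mounts.c):

    /* Accept any non-empty value; treat a missing value as a parse error,
     * just like an unparsable /PARTNROFF suffix. */
    if (!uuid_str || !*uuid_str) {
        pr_err("Empty PARTUUID= value is invalid.\n");
        goto clear_root_wait;    /* hypothetical label: print message, disable rootwait */
    }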

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Will Drewry <wad@chromium.org>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
11 years agoblock: store partition_meta_info.uuid as a string
Stephen Warren [Fri, 9 Nov 2012 00:12:25 +0000 (16:12 -0800)]
block: store partition_meta_info.uuid as a string

This will allow other types of UUID to be stored here, aside from true
UUIDs.  This also simplifies code that uses this field, since it's usually
constructed from, used as, or compared to other strings.

Note: A simplistic approach here would be to set uuid_str[36]=0 whenever a
/PARTNROFF option was found to be present.  However, this modifies the
input string, and causes subsequent calls to devt_from_partuuid() not to
see the /PARTNROFF option, which causes different results.  In order to
avoid misleading future maintainers, this parameter is marked const.
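
As a hedged sketch of what a string-typed field buys (the struct below is a
simplified stand-in, not the real partition_meta_info definition):

    #include <stdbool.h>
    #include <string.h>

    /* A NUL-terminated string holds a 36-char GPT UUID or a shorter
     * "ssssssss-pp" MBR identifier alike. */
    struct part_meta_sketch {
        char uuid[37];
    };

    static bool part_matches(const struct part_meta_sketch *info, const char *wanted)
    {
        /* construction and comparison become plain string operations */
        return strcmp(info->uuid, wanted) == 0;
    }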

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Will Drewry <wad@chromium.org>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
11 years agocciss: use check_signature()
Akinobu Mita [Fri, 9 Nov 2012 00:12:23 +0000 (16:12 -0800)]
cciss: use check_signature()

Use check_signature() to find a signature in the mmio address.
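
Schematically (a sketch, assuming the usual check_signature() helper from
<linux/io.h>; the mapped address variable is a made-up name), the open-coded
readb() compare loop collapses into a single call:

    #include <linux/io.h>

    static const unsigned char sig[] = "CISS";

    /* before: loop over readb(cfg_table_addr + i), comparing byte by byte */
    /* after: */
    if (!check_signature(cfg_table_addr, sig, 4))
        return -ENODEV;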

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Mike Miller <mike.miller@hp.com>
Cc: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
11 years agocciss: cleanup bitops usage
Akinobu Mita [Fri, 9 Nov 2012 00:12:22 +0000 (16:12 -0800)]
cciss: cleanup bitops usage

- Remove unnecessary correction of bit and address
- Use BITS_TO_LONGS macro to calculate bitmap size
- Use bitmap_zero()
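
For example (a generic sketch, not the cciss hunks themselves):

    #include <linux/bitmap.h>
    #include <linux/slab.h>

    int nr_cmds = 64;        /* example size */
    unsigned long *cmd_bits;

    /* BITS_TO_LONGS() replaces hand-rolled round-up arithmetic on the size */
    cmd_bits = kmalloc(BITS_TO_LONGS(nr_cmds) * sizeof(unsigned long), GFP_KERNEL);
    if (!cmd_bits)
        return -ENOMEM;
    /* bitmap_zero() instead of memset() with a manually computed length */
    bitmap_zero(cmd_bits, nr_cmds);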

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Mike Miller <mike.miller@hp.com>
Cc: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
11 years agoMerge branch 'stable/for-jens-3.8' of git://git.kernel.org/pub/scm/linux/kernel/git...
Jens Axboe [Mon, 12 Nov 2012 16:18:47 +0000 (09:18 -0700)]
Merge branch 'stable/for-jens-3.8' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen into for-3.8/drivers

11 years agodrbd: use copy_highpage
Akinobu Mita [Fri, 9 Nov 2012 00:12:31 +0000 (16:12 -0800)]
drbd: use copy_highpage

Use copy_highpage() to copy from one page to another.
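
Schematically (a sketch, not the drbd hunk itself):

    #include <linux/highmem.h>

    /* before:
     *    s = kmap_atomic(src_page);
     *    d = kmap_atomic(dst_page);
     *    memcpy(d, s, PAGE_SIZE);
     *    kunmap_atomic(d);
     *    kunmap_atomic(s);
     * after (destination first, then source): */
    copy_highpage(dst_page, src_page);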

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: if the replication link breaks during handshake, keep retrying
Lars Ellenberg [Mon, 5 Nov 2012 10:54:30 +0000 (11:54 +0100)]
drbd: if the replication link breaks during handshake, keep retrying

The 8.3.12 commit drbd: Bugfix for the connection behavior fixes a
"wasted established connection", if a former connection attempt failed
during its early stages.

However it opened a window for a regression, if a connection attempt
fails during its last stages.  The result was a terminated receiver
thread, that left behind the supposedly transient "C_UNCONNECTED" state.
Any later requests to change the connection state fail, as they wait for
the connection state to "stabilize".

Fix: short circuit and keep retrying to re-establish a new connection,
if we don't reach C_WF_REPORT_PARAMS.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: check return of kmalloc in receive_uuids
Jing Wang [Thu, 25 Oct 2012 07:00:56 +0000 (15:00 +0800)]
drbd: check return of kmalloc in receive_uuids

Signed-off-by: Jing Wang <windsdaemon@gmail.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agoMerge branch 'drbd-8.4_ed6' into for-3.8-drivers-drbd-8.4_ed6
Philipp Reisner [Fri, 9 Nov 2012 13:18:43 +0000 (14:18 +0100)]
Merge branch 'drbd-8.4_ed6' into for-3.8-drivers-drbd-8.4_ed6

11 years agodrbd: Broadcast sync progress no more often than once per second
Philipp Reisner [Fri, 19 Oct 2012 12:37:47 +0000 (14:37 +0200)]
drbd: Broadcast sync progress no more often than once per second

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: don't try to clear bits once the disk has failed
Philipp Reisner [Fri, 19 Oct 2012 12:21:22 +0000 (14:21 +0200)]
drbd: don't try to clear bits once the disk has failed

If the disk has failed already, there is no point trying to change the
bitmap. drbd_set_out_of_sync() already had this safeguard,
time to add it to drbd_set_in_sync() as well.

This also prevents some warning messages, like
 FIXME asender in bm_change_bits_to, bitmap locked for 'detach' by worker
if our disk fails during resync, while there are some resync acks queued up.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: fix regression: potential NULL pointer dereference
Philipp Reisner [Fri, 19 Oct 2012 12:19:23 +0000 (14:19 +0200)]
drbd: fix regression: potential NULL pointer dereference

recent commit
    drbd: always write bitmap on detach
introduced a bitmap writeout during detach,
which obviously needs some meta data device to write to.

Unfortunately, that same error path may be taken if we fail to attach,
e.g. due to UUID mismatch, after we changed state to D_ATTACHING,
but before the lower level device pointer is even assigned.

We need to test for presence of mdev->ldev.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Fix clearing of MDF_AL_DISABLED
Philipp Reisner [Mon, 1 Oct 2012 16:04:12 +0000 (18:04 +0200)]
drbd: Fix clearing of MDF_AL_DISABLED

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: log request sector offset and size for IO errors
Lars Ellenberg [Thu, 27 Sep 2012 13:19:38 +0000 (15:19 +0200)]
drbd: log request sector offset and size for IO errors

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: always write bitmap on detach
Lars Ellenberg [Thu, 27 Sep 2012 13:18:21 +0000 (15:18 +0200)]
drbd: always write bitmap on detach

If we detach due to local read-error (which sets a bit in the bitmap),
stay Primary, and then re-attach (which re-reads the bitmap from disk),
we potentially lost the "out-of-sync" (or, "bad block") information in
the bitmap.

Always (try to) write out the changed bitmap pages before going diskless.

That way, we don't lose the bit for the bad block,
the next resync will fetch it from the peer, and rewrite
it locally, which may result in block reallocation in some
lower layer (or the hardware), and thereby "heal" the bad blocks.

If the bitmap writeout errors out as well, we will (again: try to)
mark the "we need a full sync" bit in our super block,
if it was a READ error; writes are covered by the activity log already.

If that superblock does not make it to disk either, we are sorry.

Maybe we just lost an entire disk or controller (or iSCSI connection),
and there actually are no bad blocks at all, so we don't need to
re-fetch from the peer, there is no "auto-healing" necessary.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: wait for meta data IO completion even with failed disk, unless force-detached
Lars Ellenberg [Thu, 27 Sep 2012 13:07:11 +0000 (15:07 +0200)]
drbd: wait for meta data IO completion even with failed disk, unless force-detached

The intention of force-detach is to be able to deal with a completely
unresponsive lower level IO stack, which does not even deliver error
completions anymore, but no completion at all.

In all other cases, we must still wait for the meta data IO completion.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: a few more GFP_KERNEL -> GFP_NOIO
Lars Ellenberg [Wed, 26 Sep 2012 12:22:40 +0000 (14:22 +0200)]
drbd: a few more GFP_KERNEL -> GFP_NOIO

This has not yet been observed, but conceivably, when using GFP_KERNEL
allocations from drbd_md_sync(), drbd_flush_after_epoch() or
receive_SyncParam(), we could trigger additional IO to our own device,
or another device in a criss-cross setup, and end up in a local
deadlock, or potentially a distributed deadlock in a criss-cross setup
involving the peer blocked in a similar way waiting for us to make
progress.
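
The pattern itself is trivial (a sketch; variable names are illustrative):

    /* GFP_KERNEL may recurse into the block layer and, via our own device
     * or a criss-cross peer, deadlock. GFP_NOIO forbids that recursion. */
    buf = kmalloc(size, GFP_NOIO);    /* was: kmalloc(size, GFP_KERNEL) */
    if (!buf)
        return -ENOMEM;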

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: fix potential deadlock during bitmap (re-)allocation
Lars Ellenberg [Wed, 26 Sep 2012 12:18:51 +0000 (14:18 +0200)]
drbd: fix potential deadlock during bitmap (re-)allocation

The former comment arguing that GFP_KERNEL was good enough was wrong: it
did not take resize into account at all, and assumed the only path
leading here was the normal attach on a still secondary device, so no
deadlock would be possible.

Both resize on a Primary, or attach on a diskless Primary,
could potentially deadlock.

drbd_bm_resize() is called while IO to the respective device is
suspended, so we must use GFP_NOIO to avoid potential deadlock.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: use list_move_tail instead of list_del/list_add_tail
Lars Ellenberg [Wed, 26 Sep 2012 12:16:30 +0000 (14:16 +0200)]
drbd: use list_move_tail instead of list_del/list_add_tail

Using list_move_tail() instead of list_del() + list_add_tail().

spatch with a semantic match was used to find this problem.
(http://coccinelle.lip6.fr/)
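
The transformation is mechanical (generic sketch; the list member name is
illustrative):

    #include <linux/list.h>

    /* before:
     *    list_del(&req->tl_requests);
     *    list_add_tail(&req->tl_requests, &barrier_acked_requests);
     * after, one call with the same effect: */
    list_move_tail(&req->tl_requests, &barrier_acked_requests);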

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: panic on delayed completion of aborted requests
Philipp Reisner [Tue, 4 Sep 2012 13:16:20 +0000 (15:16 +0200)]
drbd: panic on delayed completion of aborted requests

"aborting" requests, or force-detaching the disk, is intended for
completely blocked/hung local backing devices which do no longer
complete requests at all, not even do error completions.  In this
situation, usually a hard-reset and failover is the only way out.

By "aborting", basically faking a local error-completion,
we allow for a more graceful switchover by cleanly migrating services.
Still the affected node has to be rebooted "soon".

By completing these requests, we allow the upper layers to re-use
the associated data pages.

If later the local backing device "recovers", and now DMAs some data
from disk into the original request pages, in the best case it will
just put random data into unused pages; but typically it will corrupt
meanwhile completely unrelated data, causing all sorts of damage.

Which means delayed successful completion,
especially for READ requests,
is a reason to panic().

We assume that a delayed *error* completion is OK,
though we still will complain noisily about it.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Fix comparison of is_valid_transition()'s return code
Philipp Reisner [Mon, 3 Sep 2012 13:39:01 +0000 (15:39 +0200)]
drbd: Fix comparison of is_valid_transition()'s return code

is_valid_transition() might return SS_NOTHING_TO_DO.

The condition function _req_st_cond() returned SS_NOTHING_TO_DO, which
caused the wait_event to abort too early. Therefore drbd_req_state()
did not consume the next CL_ST_CHG_SUCCESS or SS_CW_FAILED_BY_PEER,
causing severe disruption of the state machine logic...

Detaching from a single volume was one way to trigger this bug.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Remove duplicate code
Philipp Reisner [Mon, 3 Sep 2012 12:04:23 +0000 (14:04 +0200)]
drbd: Remove duplicate code

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: differentiate early and later "postponing" of requests
Lars Ellenberg [Mon, 3 Sep 2012 12:08:35 +0000 (14:08 +0200)]
drbd: differentiate early and later "postponing" of requests

We use the RQ_POSTPONED flag to mark a request for several reasons.

It may be a conflicting request in a dual-primary setup,
where conflict detection and resolution on the peer decided that
this request needs to be re-submitted, it needs to re-enter
drbd_make_request() to fix the data divergence caused by these
conflicting, partially overlapping, quasi-simultaneous requests.

In this case we need to mark the corresponding area as out-of-sync,
before we call drbd_al_complete_io().

We also use the RQ_POSTPONED flag to just "push back" a request,
before even processing it, if IO is suspended for some reason.
In this case, as this request was neither submitted nor sent yet,
we must not touch the bitmap.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Fix postponed requests
Philipp Reisner [Wed, 29 Aug 2012 13:23:14 +0000 (15:23 +0200)]
drbd: Fix postponed requests

A postponed request might have RQ_IN_ACT_LOG already set, but
be POSTPONED before it gets something in the RQ_LOCAL_MASK
set. Up to now this caused a left-over active extent.

Fix that by only testing for the RQ_IN_ACT_LOG bit in drbd_req_destroy().

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Call drbd_md_sync() explicitly after a state change on the connection
Philipp Reisner [Tue, 28 Aug 2012 14:48:03 +0000 (16:48 +0200)]
drbd: Call drbd_md_sync() explicitly after a state change on the connection

Without this, the meta-data gets updated after 5 seconds by the
md_sync_timer. Better to do it immediately after a state change.

If the asender detects a network failure, it may take a bit until
the worker processes the corresponding after-conn-state-change work item.

  The worker might be blocked in sending something, i.e. it
  takes until it runs into its timeout. That is 6 seconds by
  default, which is longer than the 5 seconds of the md_sync_timer.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Fix postponed requests
Philipp Reisner [Tue, 28 Aug 2012 12:39:44 +0000 (14:39 +0200)]
drbd: Fix postponed requests

* Postponed requests should not set or clear out-of-sync marks
* When a request gets postponed we need to drop its reference
  mdev->local_cnt (put_ldev()).

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Improve the error reporting of failed conn state changes
Philipp Reisner [Tue, 28 Aug 2012 09:46:22 +0000 (11:46 +0200)]
drbd: Improve the error reporting of failed conn state changes

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Fix the way the STATE_SENT bit is cleared
Philipp Reisner [Tue, 28 Aug 2012 09:33:35 +0000 (11:33 +0200)]
drbd: Fix the way the STATE_SENT bit is cleared

With merging the commit
'drbd: Delay/reject other state changes while establishing a connection'
the condition check for clearing the flag was wrong.

Move the bit clearing to the __drbd_set_state() function
in order to have it already cleared for the other parts of
the function. I.e. clearing the susp_fen in the after_state_ch() function.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Do not check aspects that are not subject to change in _conn_requests_state()
Philipp Reisner [Tue, 28 Aug 2012 09:07:56 +0000 (11:07 +0200)]
drbd: Do not check aspects that are not subject to change in _conn_requests_state()

When _conn_requests_state() is used to change other parts of the state
than the connection, do not check for a valid connection transition.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Improve readability of IO resuming after freeze due to no data access
Philipp Reisner [Mon, 27 Aug 2012 15:20:12 +0000 (17:20 +0200)]
drbd: Improve readability of IO resuming after freeze due to no data access

The previous way of doing the state change was also okay since the
state change on the susp flag gets propagated from the mdev
to the tconn.

Fortunately all this goes away in drbd-9.0

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Fix IO resuming after connection was established while executing the fence...
Philipp Reisner [Mon, 27 Aug 2012 15:16:21 +0000 (17:16 +0200)]
drbd: Fix IO resuming after connection was established while executing the fence handler

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: fix potential list_add corruption
Lars Ellenberg [Wed, 22 Aug 2012 12:59:06 +0000 (14:59 +0200)]
drbd: fix potential list_add corruption

If the md_sync_timer triggers a second time,
while the work queued during the first time is still pending,
this could result in list_add() of an already added item,
and corrupt the work item list.

This likely only triggered because of the erroneous
batch-dequeueing of work items fixed with
  drbd: dequeue single work items in wait_for_work()

Still, skip queueing if md_sync_work is already queued.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: dequeue single work items in wait_for_work()
Lars Ellenberg [Wed, 22 Aug 2012 09:47:14 +0000 (11:47 +0200)]
drbd: dequeue single work items in wait_for_work()

As long as we still use drbd_queue_work_front(),
we must only dequeue the single first item during normal operation.

The comment in drbd_worker() even says so,
but bc8a5a1 drbd: remove struct drbd_tl_epoch objects (barrier works)
introduced the batch dequeueing again via list_splice_init() in
wait_for_work().

Change back to list_move() of the first item, if any.
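
A sketch of the intended dequeue (lock and member names are illustrative, not
copied from the patch):

    spin_lock_irq(&wq_lock);
    /* take at most the single first item; drbd_queue_work_front() relies
     * on later items staying behind it */
    if (!list_empty(&queue_head))
        list_move(queue_head.next, &work_list);
    spin_unlock_irq(&wq_lock);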

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: mutex_unlock "... must not be used in interrupt context"
Lars Ellenberg [Wed, 22 Aug 2012 14:15:26 +0000 (16:15 +0200)]
drbd: mutex_unlock "... must not be used in interrupt context"

Documentation of mutex_unlock says
we must not use it in interrupt context.
So do not call it while holding the spin_lock_irq,
but give up the spinlock temporarily.
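
Schematically (a sketch of the ordering; the lock names are assumptions):

    spin_unlock_irq(&tconn->req_lock);   /* give up the spinlock first ...        */
    mutex_unlock(&cstate_mutex);         /* ... because mutex_unlock() must not   */
                                         /*     be used in interrupt/atomic context */
    spin_lock_irq(&tconn->req_lock);     /* ... then re-take the spinlock         */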

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Fix a race condition that can lead to a BUG()
Philipp Reisner [Tue, 21 Aug 2012 18:34:07 +0000 (20:34 +0200)]
drbd: Fix a race condition that can lead to a BUG()

If the preconditions for a state change change after the wait_event() we
might hit the BUG() statement in conn_set_state().

With holding the spin_lock while evaluating the condition AND until the
actual state change, we ensure that the preconditions can not change anymore.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: temporarily suspend io in drbd_adm_disk_opts
Lars Ellenberg [Mon, 20 Aug 2012 12:54:48 +0000 (14:54 +0200)]
drbd: temporarily suspend io in drbd_adm_disk_opts

drbd_adm_disk_opts() does
wait_event(mdev->al_wait, lc_try_lock(mdev->act_log));
drbd_al_shrink(mdev);

If the device is very busy, this can take a very long time to succeed.
Fix this by temporarily suspending IO,
then quickly change the settings, and resume.
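
In outline (a sketch using drbd's suspend/resume helpers; error handling and
the actual option update are omitted):

    drbd_suspend_io(mdev);    /* hold off new application IO */
    wait_event(mdev->al_wait, lc_try_lock(mdev->act_log));    /* now fast */
    drbd_al_shrink(mdev);
    /* ... apply the new disk options ... */
    lc_unlock(mdev->act_log);
    drbd_resume_io(mdev);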

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: don't send out P_BARRIER with stale information
Lars Ellenberg [Mon, 20 Aug 2012 09:05:23 +0000 (11:05 +0200)]
drbd: don't send out P_BARRIER with stale information

We must only send P_BARRIER for epochs we actually sent P_DATA in.

If we (re-)establish a connection, we reinitialized the
send.current_epoch_nr, but forgot to reset send.current_epoch_writes.

This could result in a spurious P_BARRIER with stale epoch information,
and a disconnect/reconnect cycle once the then "unexpected"
P_BARRIER_ACK is received:
  BAD! BarrierAck #28823 received, expected #28829!

Introduce re_init_if_first_write() and maybe_send_barrier() helpers,
and call them appropriately for read/write/set-out-of-sync requests.
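
Roughly, the combined effect of the two helpers looks like this sketch (the
send.* member names follow those mentioned later in this log; the send helper
and the seen_any_write_yet flag are assumptions):

    static void maybe_send_barrier_sketch(struct drbd_tconn *tconn, unsigned int epoch)
    {
        if (!tconn->send.seen_any_write_yet) {
            /* first write after (re-)connect: re-init, including the
             * previously forgotten write counter */
            tconn->send.seen_any_write_yet = true;
            tconn->send.current_epoch_nr = epoch;
            tconn->send.current_epoch_writes = 0;
            return;
        }
        if (tconn->send.current_epoch_nr != epoch) {
            if (tconn->send.current_epoch_writes)
                drbd_send_barrier(tconn);    /* only for epochs we sent P_DATA in */
            tconn->send.current_epoch_nr = epoch;
            tconn->send.current_epoch_writes = 0;
        }
    }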

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: properly call drbd_rs_cancel_all() in drbd_disconnected()
Lars Ellenberg [Fri, 17 Aug 2012 13:09:13 +0000 (15:09 +0200)]
drbd: properly call drbd_rs_cancel_all() in drbd_disconnected()

drbd_disconnected() is supposed to clear the resync lru cache,
by calling drbd_rs_cancel_all().

We must do so before we call drbd_flush_workqueue(), as at least the
callback w_restart_disk_io() may wait for resync progress, and would
otherwise deadlock.

drbd_finish_peer_reqs() may again populate that cache, which will
then potentially be stale after the next resync handshake and bitmap
exchange, so we have to do it again after that.

A stale resync lru cache causes no harm but ugly messages like this:
 BAD! sector=196608s enr=6 rs_left=-256 rs_failed=0 count=256 cstate=SyncTarget

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Remove dead code
Philipp Reisner [Wed, 8 Aug 2012 19:19:09 +0000 (21:19 +0200)]
drbd: Remove dead code

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Avoid NetworkFailure state during disconnect
Philipp Reisner [Wed, 8 Aug 2012 19:19:09 +0000 (21:19 +0200)]
drbd: Avoid NetworkFailure state during disconnect

Disconnecting is a cluster wide state change. In case the peer node agrees
to the state transition, it sends back the fact on the meta-data connection
and closes both sockets.

In case the node that initiated the state transfer sees the closing
action on the data-socket, before the P_STATE_CHG_REPLY packet, it was
going into one of the network failure states.

At least with the fencing option set to something else than "dont-care",
the unclean shutdown of the connection causes a short IO freeze or
a fence operation.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Protect accesses to the uuid set with a spinlock
Philipp Reisner [Wed, 8 Aug 2012 19:19:09 +0000 (21:19 +0200)]
drbd: Protect accesses to the uuid set with a spinlock

There is at least the worker context, the receiver context, and the
context of receiving netlink packets.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Write all pages of the bitmap after an online resize
Philipp Reisner [Tue, 14 Aug 2012 09:46:59 +0000 (11:46 +0200)]
drbd: Write all pages of the bitmap after an online resize

We need to write the whole bitmap after we moved the meta data
due to an online resize operation.

With the support for one-petabyte devices, bitmap IO was optimized
to only write out touched pages. This optimization must be turned
off when writing the bitmap after an online resize.

This issue was introduced with drbd-8.3.10.

The impact of this bug is that after an online resize, the next
resync could become larger than expected.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Fix completion of requests while the device is suspended
Philipp Reisner [Tue, 14 Aug 2012 09:28:52 +0000 (11:28 +0200)]
drbd: Fix completion of requests while the device is suspended

In various places (E.g. CONNECTION_LOST_WHILE_PENDING) the
RQ_COMPLETION_SUSP mask is passed in the clear set to mod_rq_state().

The issue was that it tried to clear the RQ_COMPLETION_SUSP bit
out of the state mask first, and eventually set it afterwards,
in the drbd_req_put_completion_ref() function.

Fixed that by moving the reference getting out of
drbd_req_put_completion_ref() into the mod_rq_state(), before the place
where the extra reference might be put.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Don't unregister socket state_change callback from within the callback
Andreas Gruenbacher [Fri, 10 Aug 2012 15:00:30 +0000 (17:00 +0200)]
drbd: Don't unregister socket state_change callback from within the callback

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: disambiguation, s/ERR_DISCARD/ERR_DISCARD_IMPOSSIBLE/
Lars Ellenberg [Wed, 1 Aug 2012 10:46:20 +0000 (12:46 +0200)]
drbd: disambiguation, s/ERR_DISCARD/ERR_DISCARD_IMPOSSIBLE/

If for some reason (typically "split-brained" cluster manager)
drbd replica data has diverged, we can choose a victim,
and reconnect using "--discard-my-data", causing the victim
to become sync-target, fetching all changed blocks from the peer.

If we are Primary, we are potentially in use, and we refuse to
"roll back" changes to the data below the page cache and other users.

Rename the error symbol for this to ERR_DISCARD_IMPOSSIBLE.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: disambiguation, s/DISCARD_CONCURRENT/RESOLVE_CONFLICTS/
Lars Ellenberg [Wed, 1 Aug 2012 10:43:01 +0000 (12:43 +0200)]
drbd: disambiguation, s/DISCARD_CONCURRENT/RESOLVE_CONFLICTS/

We don't discard anything here, really.
We resolve conflicting, concurrent writes to overlapping data blocks.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: disambiguation, s/P_DISCARD_WRITE/P_SUPERSEDED/
Lars Ellenberg [Wed, 1 Aug 2012 10:33:51 +0000 (12:33 +0200)]
drbd: disambiguation, s/P_DISCARD_WRITE/P_SUPERSEDED/

To avoid confusion with REQ_DISCARD aka TRIM, rename our
"discard concurrent write acks" from P_DISCARD_WRITE to P_SUPERSEDED.

At the same time, rename the drbd request event DISCARD_WRITE
to CONFLICT_RESOLVED. It already triggers both successful completion
or restart of the request, depending on our RQ_POSTPONED flag.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: cleanup, drop unused struct
Lars Ellenberg [Wed, 1 Aug 2012 10:30:26 +0000 (12:30 +0200)]
drbd: cleanup, drop unused struct

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: NEG_ACK does not imply a barrier-ack
Lars Ellenberg [Tue, 7 Aug 2012 04:47:14 +0000 (06:47 +0200)]
drbd: NEG_ACK does not imply a barrier-ack

Don't drop a request from the transfer log just because it was NEG_ACKED.
We need it around to be able to verify P_BARRIER_ACKs against the
transfer log.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: only start a new epoch, if the current epoch contains writes
Lars Ellenberg [Tue, 7 Aug 2012 04:42:09 +0000 (06:42 +0200)]
drbd: only start a new epoch, if the current epoch contains writes

Almost all code paths calling start_new_tl_epoch() guarded it with
if (... current_tle_writes > 0 ... ).
Just move that inside start_new_tl_epoch().
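
I.e. the guard moves into the function itself (sketch; only the relevant part
is shown):

    void start_new_tl_epoch(struct drbd_tconn *tconn)
    {
        /* no point closing an epoch that saw no writes */
        if (tconn->current_tle_writes == 0)
            return;

        tconn->current_tle_writes = 0;
        tconn->current_tle_nr++;    /* counter named as elsewhere in this log */
        /* ... wake the sender so it can emit the P_BARRIER ... */
    }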

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Finish requests that completed while IO was frozen
Philipp Reisner [Tue, 7 Aug 2012 11:28:00 +0000 (13:28 +0200)]
drbd: Finish requests that completed while IO was frozen

Requests of an acked epoch are stored on the barrier_acked_requests list. In
case the private bio of such a request completes while IO on the drbd device
is suspended [req_mod(completed_ok)] then the request stays there.

When thawing IO because the fence_peer handler returned, then we use
tl_clear() to apply the connection_lost_while_pending event to all requests
on the transfer-log and the barrier_acked_requests list.

Up to now the connection_lost_while_pending event was not applied
to requests on the barrier_acked_requests list. Fixed that.

I.e. now the connection_lost_while_pending and resend events are
applied to requests on the barrier_acked_requests list. For that
it is necessary that the resend event finishes (local only)
READS correctly.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Fix a potential issue with the DISCARD_CONCURRENT flag
Lars Ellenberg [Fri, 3 Aug 2012 23:07:55 +0000 (01:07 +0200)]
drbd: Fix a potential issue with the DISCARD_CONCURRENT flag

The DISCARD_CONCURRENT flag should be set on one node and cleared on the
other node.
As the code was before, it was theoretically possible that a node accepts the
meta socket, but has to close it later on, and keeps the DISCARD_CONCURRENT
flag.
Correct this by moving the clear_bit(DISCARD_CONCURRENT) where the packet
gets sent.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: fix drbd wire compatibility for empty flushes
Lars Ellenberg [Fri, 3 Aug 2012 00:19:09 +0000 (02:19 +0200)]
drbd: fix drbd wire compatibility for empty flushes

DRBD has a concept of request epochs or reorder-domains,
which are separated on the wire by P_BARRIER packets.

Older DRBD is not able to handle zero-sized requests at all,
so we need to map empty flushes to these drbd barriers.

These are the equivalent of empty flushes, and
by default trigger flushes on the receiving side anyway
(unless not supported or explicitly disabled),
so there is no need to handle this differently in newer drbd either.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: More random to the connect logic
Philipp Reisner [Wed, 1 Aug 2012 12:53:39 +0000 (14:53 +0200)]
drbd: More random to the connect logic

Since the listening socket is open all the time, it was possible to
get into stable "initial packet S crossed" loops.

* when both sides realize in the drbd_socket_okay() call at the end
  of the loop that the other side closed the main socket you had
  the chance to get into a stable loop with repeated "packet S crossed"
  messages.

* when both sides do not realize with the drbd_socket_okay() call at the end
  of the loop that the other side closed the main socket you had
  the chance to get into a stable loop with alternating "packet S crossed"
  "packet M crossed" messages.

In order to break out of these stable loops, randomize the behaviour if
such a crossing of P_INITIAL_DATA or P_INITIAL_META packets is detected.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Try to connect to peer only once per cycle
Philipp Reisner [Wed, 1 Aug 2012 09:41:01 +0000 (11:41 +0200)]
drbd: Try to connect to peer only once per cycle

Since now our listening socket is open all the time, we will always
receive the peer's incoming connection attempts. No need to try
three times.

This is valid when connecting to older peers as well; it simply
increases the probability that the new-version DRBD will accept
a connection instead of establishing one.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Remove redundant and wrong test for NULL simplification in conn_connect()
Philipp Reisner [Thu, 26 Jul 2012 12:12:59 +0000 (14:12 +0200)]
drbd: Remove redundant and wrong test for NULL simplification in conn_connect()

Since the drbd_socket_okay() function itself tests if the
socket is NULL, the explicit test "if (sock.socket && &msock.socket)"
was redundant.
Apart from that, the address operator ('&') before msock.socket rendered
the test pointless.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: pass some more information to userspace.
Philipp Marek [Sat, 3 Mar 2012 20:04:30 +0000 (21:04 +0100)]
drbd: pass some more information to userspace.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: announce FLUSH/FUA capability to upper layers
Lars Ellenberg [Mon, 30 Jul 2012 07:00:54 +0000 (09:00 +0200)]
drbd: announce FLUSH/FUA capability to upper layers

In 8.4, we may have bios spanning two activity log extents.
Fixup drbd_al_begin_io() and drbd_al_complete_io() to deal with zero sized bios.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: introduce stop-sector to online verify
Lars Ellenberg [Thu, 26 Jul 2012 12:09:49 +0000 (14:09 +0200)]
drbd: introduce stop-sector to online verify

We now can schedule only a specific range of sectors for online verify,
or interrupt a running verify without interrupting the connection.

Had to bump the protocol version differently, we are now 101.
Added verify_can_do_stop_sector() { protocol >= 97 && protocol != 100; }

Also, the return value convention for worker callbacks has changed,
we returned "true/false" for "keep the connection up" in 8.3,
we return 0 for success and <= for failure in 8.4.
Affected: receive_state()
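
Spelled out as C (a sketch; the agreed-protocol-version field name is an
assumption):

    static bool verify_can_do_stop_sector(struct drbd_conf *mdev)
    {
        /* stop-sector support exists since protocol 97,
         * but protocol 100 lacks it */
        return mdev->tconn->agreed_pro_version >= 97 &&
               mdev->tconn->agreed_pro_version != 100;
    }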

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: flush drbd work queue before invalidate/invalidate remote
Lars Ellenberg [Mon, 30 Jul 2012 07:11:38 +0000 (09:11 +0200)]
drbd: flush drbd work queue before invalidate/invalidate remote

If you do back to back wait-sync/invalidate on a Primary in a tight loop,
during application IO load, you could trigger a race:
  kernel: block drbd6: FIXME going to queue 'set_n_write from StartingSync'
    but 'write from resync_finished' still pending?

Fix this by changing the order of the drbd_queue_work() and
the wake_up() in dec_ap_pending(), and adding the additional
drbd_flush_workqueue() before requesting the full sync.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: call local-io-error handler early
Lars Ellenberg [Mon, 30 Jul 2012 07:11:01 +0000 (09:11 +0200)]
drbd: call local-io-error handler early

In case we want to hard-reset from the local-io-error handler,
we need to call it before notifying the peer or aborting local IO.
Otherwise the peer will advance its data generation UUIDs even
if secondary.

This way, local io error looks like a "regular" node crash,
which reduces the number of different failure cases.
This may be useful in a bigger picture where crashed or otherwise
"misbehaving" nodes are automatically re-deployed.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: do not reset rs_pending_cnt too early
Lars Ellenberg [Mon, 30 Jul 2012 07:10:41 +0000 (09:10 +0200)]
drbd: do not reset rs_pending_cnt too early

Fix asserts like
  block drbd0: in got_BlockAck:4634: rs_pending_cnt = -35 < 0 !

We reset the resync lru cache and related information (rs_pending_cnt),
once we successfully finished a resync or online verify, or if the
replication connection is lost.

We also need to reset it if a resync or online verify is aborted
because a lower level disk failed.

In that case the replication link is still established,
and we may still have packets queued in the network buffers
which want to touch rs_pending_cnt.

We do not have any synchronization mechanism to know for sure when all
such pending resync related packets have been drained.

To avoid this counter going negative (and violating the ASSERT that it
will always be >= 0), just do not reset it when we lose a disk.

It is good enough to make sure it is re-initialized before the next
resync can start: reset it when we re-attach a disk.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: reset congestion information before reporting it in /proc/drbd
Lars Ellenberg [Mon, 30 Jul 2012 07:09:36 +0000 (09:09 +0200)]
drbd: reset congestion information before reporting it in /proc/drbd

We cache the congestion status in mdev->congestion_reason whenever
drbd_congested() was called.
Reset this cached info before reporting it when reading /proc/drbd.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: report congestion if we are waiting for some userland callback
Lars Ellenberg [Mon, 30 Jul 2012 07:08:25 +0000 (09:08 +0200)]
drbd: report congestion if we are waiting for some userland callback

If the drbd worker thread is synchronously waiting for some userland
callback, we don't want some casual pageout to block on us.
Have drbd_congested() report congestion in that case.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: differentiate between normal and forced detach
Lars Ellenberg [Mon, 30 Jul 2012 07:07:28 +0000 (09:07 +0200)]
drbd: differentiate between normal and forced detach

Aborting local requests (not waiting for completion from the lower level
disk) is dangerous: if the master bio has been completed to upper
layers, data pages may be re-used for other things already.
If local IO is still pending and later completes,
this may cause crashes or corrupt unrelated data.

Only abort local IO if explicitly requested.
Intended use case is a lower level device that turned into a tarpit,
not completing io requests, not even doing error completion.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: cleanup, remove two unused global flags
Lars Ellenberg [Mon, 30 Jul 2012 07:07:04 +0000 (09:07 +0200)]
drbd: cleanup, remove two unused global flags

The two unused "global flags" in 8.3 are "per volume" flags in 8.4.
Still, they are unused, so lose them.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: fix null pointer dereference with on-congestion policy when diskless
Lars Ellenberg [Mon, 30 Jul 2012 07:06:26 +0000 (09:06 +0200)]
drbd: fix null pointer dereference with on-congestion policy when diskless

We must not look at mdev->actlog, unless we have a get_ldev() reference.
It also does not make much sense to try to disconnect or pull-ahead of
the peer, if we don't have good local data.

Only even consider congestion policies, if our local disk is D_UP_TO_DATE.
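
The guard is the usual get_ldev() reference pattern (a sketch; details
simplified):

    /* Only evaluate on-congestion policies with usable local data;
     * never touch mdev->act_log without a reference. */
    if (get_ldev_if_state(mdev, D_UP_TO_DATE)) {
        /* ... look at mdev->act_log, decide disconnect/pull-ahead ... */
        put_ldev(mdev);
    }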

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: take error path in drbd_adm_down if interrupted by signal
Lars Ellenberg [Tue, 24 Jul 2012 08:13:55 +0000 (10:13 +0200)]
drbd: take error path in drbd_adm_down if interrupted by signal

drbd_adm_down() does adm_detach(), which can fail with various error
codes, or be interrupted by a signal.

The interrupted by signal case was not properly handled,
leading to
block drbd0: ASSERT( mdev->state.disk == D_DISKLESS &&
                     mdev->state.conn == C_STANDALONE ) in drbd/drbd_worker.c
and further to destroying objects while still in use, and resulting crashes.

Detect the interruption, and take the error path out.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: allow read requests to be retried after force-detach
Lars Ellenberg [Tue, 24 Jul 2012 08:12:36 +0000 (10:12 +0200)]
drbd: allow read requests to be retried after force-detach

Sometimes, a lower level block device turns into a tar-pit,
not completing requests at all, not even doing error completion.

We can force-detach from such a tar-pit block device,
either by disk-timeout, or by drbdadm detach --force.

Queueing for retry only from the request destruction path (kref hit 0)
makes it impossible to retry affected read requests from the peer,
until the local IO completion happened, as the locally submitted
bio holds a reference on the drbd request object.

If we can only complete READs when the local completion finally
happens, we would not need to force-detach in the first place.

Instead, queue for retry where we otherwise had done the error completion.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: __req_mod: make DISCARD_WRITE an independent case
Lars Ellenberg [Tue, 24 Jul 2012 07:31:18 +0000 (09:31 +0200)]
drbd: __req_mod: make DISCARD_WRITE an independent case

cherry-picked and adapted from drbd 9 devel branch

This looks cleaner to me,
and also gets rid of the other ugly if-inside-case-fall-through.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: base completion and destruction of requests on ref counts
Lars Ellenberg [Tue, 24 Jan 2012 16:19:42 +0000 (17:19 +0100)]
drbd: base completion and destruction of requests on ref counts

cherry-picked and adapted from drbd 9 devel branch

The logic for when to get or put a reference is in mod_rq_state().

To not get confused in the freeze/thaw respectively resend/restart
paths, or when cleaning up requests waiting for P_BARRIER_ACK, this
also introduces additional state flags:
RQ_COMPLETION_SUSP, and RQ_EXP_BARR_ACK.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: introduce completion_ref and kref to struct drbd_request
Lars Ellenberg [Tue, 24 Jan 2012 15:58:11 +0000 (16:58 +0100)]
drbd: introduce completion_ref and kref to struct drbd_request

cherry-picked and adapted from drbd 9 devel branch

completion_ref will count pending events necessary for completion.
kref is for destruction.

This only introduces these new members of struct drbd_request,
a followup patch will make actual use of them.
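
The two additions themselves (a sketch; surrounding members omitted):

    struct drbd_request {
        /* ... existing members ... */
        atomic_t completion_ref;    /* counts pending events needed before completion */
        struct kref kref;           /* destruction is driven by this reference count */
    };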

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: __drbd_make_request() is now void
Lars Ellenberg [Tue, 24 Jan 2012 15:49:58 +0000 (16:49 +0100)]
drbd: __drbd_make_request() is now void

The previous commit causes __drbd_make_request() to always return 0.
Change it to void.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: better separate WRITE and READ code paths in drbd_make_request
Lars Ellenberg [Thu, 29 Mar 2012 15:04:14 +0000 (17:04 +0200)]
drbd: better separate WRITE and READ code paths in drbd_make_request

cherry-picked and adapted from drbd 9 devel branch

READs will be interesting to at most one connection,
WRITEs should be interesting for all established connections.

Introduce some helper functions to hopefully make this easier to follow.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: remove struct drbd_tl_epoch objects (barrier works)
Lars Ellenberg [Mon, 28 Nov 2011 14:04:49 +0000 (15:04 +0100)]
drbd: remove struct drbd_tl_epoch objects (barrier works)

cherry-picked and adapted from drbd 9 devel branch

DRBD requests (struct drbd_request) are already on the per resource
transfer log list, and carry their epoch number. We do not need to
additionally link them on other ring lists in other structs.

The drbd sender thread can recognize by itself when to send a P_BARRIER,
by tracking the currently processed epoch, and how many writes
have been processed for that epoch.

If the epoch of the request to be processed does not match the currently
processed epoch, and any writes have been processed in it, a P_BARRIER for
this last processed epoch is sent out first.
The new epoch then becomes the currently processed epoch.

To not get stuck in drbd_al_begin_io() waiting for P_BARRIER_ACK,
the sender thread also needs to handle the case when the current
epoch was closed already, but no new requests are queued yet,
and send out P_BARRIER as soon as possible.

This is done by comparing the per resource "current transfer log epoch"
(tconn->current_tle_nr) with the per connection "currently processed
epoch number" (tconn->send.current_epoch_nr), while waiting for
new requests to be processed in wait_for_work().

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: move the drbd_work_queue from drbd_socket to drbd_connection
Lars Ellenberg [Mon, 14 Nov 2011 14:42:37 +0000 (15:42 +0100)]
drbd: move the drbd_work_queue from drbd_socket to drbd_connection

cherry-picked and adapted from drbd 9 devel branch
In 8.4, we don't distinguish between "resource work" and "connection
work" yet, we have one worker for both, as we still have only one connection.

We only ever used the "data.work",
no need to keep the "meta.work" around.

Move tconn->data.work to tconn->sender_work.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: allow to dequeue batches of work at a time
Lars Ellenberg [Wed, 19 Oct 2011 09:50:57 +0000 (11:50 +0200)]
drbd: allow to dequeue batches of work at a time

cherry-picked and adapted from drbd 9 devel branch

In 8.4, we still use drbd_queue_work_front(),
so in normal operation, we can not dequeue batches,
but only single items.

Still, followup commits will wake the worker
without explicitly queueing a work item,
so up() is replaced by a simple wake_up().

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: transfer log epoch numbers are now per resource
Lars Ellenberg [Thu, 17 Nov 2011 10:49:46 +0000 (11:49 +0100)]
drbd: transfer log epoch numbers are now per resource

cherry-picked from drbd 9 devel branch.

In preparation of multiple connections, the "barrier number" or
"epoch number" needs to be tracked per-resource, not per connection.
The sequence number space will not be reset anymore.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: rename drbd_restart_write to drbd_restart_request
Lars Ellenberg [Tue, 17 Jul 2012 08:05:04 +0000 (10:05 +0200)]
drbd: rename drbd_restart_write to drbd_restart_request

Meanwhile, this is used to restart failed READ requests as well.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: fix wrong assert in completion/retry path of failed local reads
Lars Ellenberg [Fri, 8 Jun 2012 14:39:24 +0000 (16:39 +0200)]
drbd: fix wrong assert in completion/retry path of failed local reads

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: fix local read error hung forever
Lars Ellenberg [Fri, 8 Jun 2012 14:30:30 +0000 (16:30 +0200)]
drbd: fix local read error hung forever

The commit
    drbd: simplify retry path of failed READ requests
simplified it too much:
it just did not do anything for local read errors.

Add the missing req_may_be_completed_not_susp() to the
READ_COMPLETED_WITH_ERROR case.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: fix access of unallocated pages and kernel panic
Lars Ellenberg [Fri, 8 Jun 2012 13:06:39 +0000 (15:06 +0200)]
drbd: fix access of unallocated pages and kernel panic

BUG: unable to handle kernel NULL pointer dereference at (null)
...
 [<d1e17561>] ? _drbd_bm_set_bits+0x151/0x240 [drbd]
 [<d1e236f8>] ? receive_bitmap+0x4f8/0xbc0 [drbd]

This fixes an off-by-one error in the receive_bitmap() path,
if run-length encoded bitmap transfer is enabled.

If the bitmap is an exact multiple of PAGE_SIZE, which means the visible
capacity of the drbd device is an exact multiple of 128 MiB (for 4k page
size), and bitmap compression (use-rle) is enabled (which became default
with 8.4), and the very last bit is dirty and reported in an rle
compressed bitmap packet, we ended up trying to kmap_atomic a page pointer
that does not exist (bitmap->bm_pages[last index + 1]).

bug introduced by:
    Date:   Fri Jul 24 15:33:24 2009 +0200
    set bits: optimize for complete last word, fix off-by-one-word corner case

made effective by:
    Date:   Thu Dec 16 00:32:38 2010 +0100
    drbd: get rid of unused debug code

    Long time ago, we had paranoia code in the bitmap that allocated one
    extra word, assigned a magic value, and checked on every occasion that
    the magic value was still unchanged.

    That debug code is unused, the extra long word complicates code a bit.
    Get rid of it.

No-one triggered this bug in the last few years, because a large subset
of our userbase is unaffected:
 * typically the last few blocks of a device are not modified
   frequently, and remain unset
 * use-rle was disabled by default in drbd < 8.4
 * those with slightly "odd" device sizes, or
 * drbd internal meta data (which skews the device size slightly,
   thus making it harder to hit a bug-relevant device size)

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Keep the listening socket open while trying to connect to the peer
Philipp Reisner [Thu, 12 Jul 2012 12:22:37 +0000 (14:22 +0200)]
drbd: Keep the listening socket open while trying to connect to the peer

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: pull prepare_listen_socket() out of drbd_wait_for_connect()
Philipp Reisner [Thu, 12 Jul 2012 09:08:34 +0000 (11:08 +0200)]
drbd: pull prepare_listen_socket() out of drbd_wait_for_connect()

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: New disk option al-updates
Philipp Reisner [Mon, 20 Feb 2012 20:53:28 +0000 (21:53 +0100)]
drbd: New disk option al-updates

By disabling al-updates one might increase performance. The price for
that is that in case a crashed primary (that had al-updates disabled)
is reintegrated, it will receive a full resync instead of a bitmap
based resync.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Stop using NLA_PUT*().
Andreas Gruenbacher [Wed, 11 Jul 2012 18:36:03 +0000 (20:36 +0200)]
drbd: Stop using NLA_PUT*().

These macros no longer exist in kernel version v3.5-rc1.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Remove drbd_accept() and use kernel_accept() instead
Philipp Reisner [Thu, 12 Jul 2012 08:25:35 +0000 (10:25 +0200)]
drbd: Remove drbd_accept() and use kernel_accept() instead

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Move the call to listen() out of drbd_accept()
Philipp Reisner [Thu, 12 Jul 2012 08:22:48 +0000 (10:22 +0200)]
drbd: Move the call to listen() out of drbd_accept()

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: use bitmap_parse instead of __bitmap_parse
Philipp Reisner [Mon, 30 Apr 2012 10:53:52 +0000 (12:53 +0200)]
drbd: use bitmap_parse instead of __bitmap_parse

The buffer 'sc.cpu_mask' is a kernel buffer.  If bitmap_parse is used
instead of __bitmap_parse, the extra parameter that indicates a kernel
buffer is not needed.
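
Schematically (a sketch with generic placeholder arguments, assuming the
bitmap_parse()/__bitmap_parse() prototypes of that era):

    /* before: the extra '0' argument says "buf is a kernel buffer" */
    err = __bitmap_parse(buf, buflen, 0, maskp, nmaskbits);
    /* after: bitmap_parse() already assumes a kernel buffer */
    err = bitmap_parse(buf, buflen, maskp, nmaskbits);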

Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Cc: Lars Ellenberg <drbd-dev@lists.linbit.com>
Cc: Philipp Reisner <philipp.reisner@linbit.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: grammar fix in log message
Lars Ellenberg [Mon, 7 May 2012 11:09:00 +0000 (13:09 +0200)]
drbd: grammar fix in log message

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: bm_page_async_io: properly initialize page->private
Lars Ellenberg [Mon, 7 May 2012 11:04:03 +0000 (13:04 +0200)]
drbd: bm_page_async_io: properly initialize page->private

If bm_page_async_io is advised to use a new page for I/O
(BM_AIO_COPY_PAGES is set), it will get it from a mempool.
Once the mempool has to dip into its reserves the page is
not reinitialized, i.e. page->private contains garbage, which
will lead to various problems once the I/O completes (dereferences
of NULL pointers, the submitting thread getting stuck in D-state,
 ...).

Signed-off-by: Arne Redlich <arne.redlich@googlemail.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: allow bitmap to change during writeout from resync_finished
Lars Ellenberg [Mon, 7 May 2012 10:07:18 +0000 (12:07 +0200)]
drbd: allow bitmap to change during writeout from resync_finished

Symptom: messages similar to
 "FIXME asender in bm_change_bits_to,
  bitmap locked for 'write from resync_finished' by worker"

If a resync or verify is finished (or aborted), a full bitmap writeout
is triggered.  If we have ongoing local IO, the bitmap may still change
during that writeout: pending, not yet processed acks may cause bits
to be cleared, while new writes may cause bits to be set.

To fix this, introduce the drbd_bm_write_copy_pages() variant.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: fix race between drbdadm invalidate/verify and finishing resync
Lars Ellenberg [Mon, 7 May 2012 10:00:56 +0000 (12:00 +0200)]
drbd: fix race between drbdadm invalidate/verify and finishing resync

When a resync or online verify is finished or aborted,
drbd does a bulk write-out of changed bitmap pages.

If *in that very moment* a new verify or resync is triggered,
this can race:
 ASSERT( !test_bit(BITMAP_IO, &mdev->flags) ) in drbd_main.c
 FIXME going to queue 'set_n_write from StartingSync' but 'write from resync_finished' still pending?
and similar.

This can be observed with e.g. tight invalidate loops in test scripts,
and probably has no real-life implication.

Still, that race can be solved by first quiescing the device,
before starting a new resync or verify.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: fix resend/resubmit of frozen IO
Lars Ellenberg [Mon, 7 May 2012 09:53:08 +0000 (11:53 +0200)]
drbd: fix resend/resubmit of frozen IO

DRBD can freeze IO, due to fencing policy (fencing resource-and-stonith),
or because we lost access to data (on-no-data-accessible suspend-io).

Resuming from there (re-connect, or re-attach, or explicit admin
intervention) should "just work".

Unfortunately, if the re-attach/re-connect did not happen within
the timeout, since the commit

  drbd: Implemented real timeout checking for request processing time

if so configured, the request_timer_fn() would time out and
detach/disconnect virtually immediately.

This change tracks the most recent attach and connect, and does not
time out within <configured timeout interval> after attach/connect.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: fix spelling, remove boring development log message
Philipp Reisner [Fri, 6 Apr 2012 10:13:18 +0000 (12:13 +0200)]
drbd: fix spelling, remove boring development log message

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Ensure that data_size is not 0 before using data_size-1 as index
Philipp Reisner [Fri, 6 Apr 2012 10:08:51 +0000 (12:08 +0200)]
drbd: Ensure that data_size is not 0 before using data_size-1 as index

This could be exploited by a peer which runs modified code.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Delay/reject other state changes while establishing a connection
Philipp Reisner [Fri, 6 Apr 2012 10:07:34 +0000 (12:07 +0200)]
drbd: Delay/reject other state changes while establishing a connection

Changes to the role and disk state should be delayed or rejected
while we establish a connection.

This is necessary, since the peer will base its resync decision
on the UUIDs and the state we sent in the drbd_connect() function.

The most prominent example for this race is becoming primary after
sending state and UUIDs and before the state changes to C_WF_CONNECTION.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: Fixed processing of disk-barrier, disk-flushes and disk-drain
Philipp Reisner [Fri, 30 Mar 2012 12:12:15 +0000 (14:12 +0200)]
drbd: Fixed processing of disk-barrier, disk-flushes and disk-drain

Since drbd_bump_write_ordering() is called in the attaching
process while the disk state is D_ATTACHING, it was not
considering these three flags during attach.

A call to this function was missing from drbd_adm_disk_opts().

Fixed both issues.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
11 years agodrbd: ignore volume number for drbd barrier packet exchange
Lars Ellenberg [Mon, 26 Mar 2012 18:55:17 +0000 (20:55 +0200)]
drbd: ignore volume number for drbd barrier packet exchange

Transfer log epochs, and therefore P_BARRIER packets,
are per resource, not per volume.
We must not associate them with "some random volume".

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>