Matthew Wilcox [Fri, 6 Jan 2012 20:52:56 +0000 (13:52 -0700)]
NVMe: Simplify nvme_unmap_user_pages
By using the iod->nents field (the same way other I/O paths do), we can
avoid recalculating the number of sg entries at unmap time, and make
nvme_unmap_user_pages() easier to call.
Also, use the 'write' parameter instead of assuming DMA_FROM_DEVICE.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Fri, 6 Jan 2012 20:42:45 +0000 (13:42 -0700)]
NVMe: Fix DMA mapping for admin commands
We were always mapping as DMA_FROM_DEVICE then unmapping with
DMA_TO_DEVICE, which was clearly not correct. Follow the same pattern as
nvme_submit_io() and key off the bottom bit of the opcode to determine
whether this is a read or a write.
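A minimal sketch of the idea, assuming the usual dma_map_sg()/dma_unmap_sg()
pairing and an opcode field on the command; names here are illustrative
rather than the literal driver code:

    /* Odd opcodes (e.g. write, 0x01) move data to the device; even ones
     * (e.g. read, 0x02) move data from it, so the bottom bit selects the
     * DMA direction for both map and unmap. */
    enum dma_data_direction dir =
            (cmd.opcode & 1) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;

    mapped = dma_map_sg(&dev->pci_dev->dev, sg, count, dir);
    /* ... submit the admin command and wait for its completion ... */
    dma_unmap_sg(&dev->pci_dev->dev, sg, count, dir);  /* same count as map */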
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Tue, 20 Dec 2011 18:34:52 +0000 (13:34 -0500)]
NVMe: Merge the nvme_bio and nvme_prp data structures
The new merged data structure is called nvme_iod. This improves performance
for mid-sized I/Os (in the 16k range) since we save a memory allocation.
It is also a slightly simpler interface to use.
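A hedged sketch of what the merged structure looks like; field names and
layout here are approximate and the driver source is authoritative:

    struct nvme_iod {
            void *private;          /* for the submitter of the I/O */
            int npages;             /* pages allocated for PRP list entries */
            int offset;             /* of the PRP list in the first page */
            int nents;              /* scatterlist entries in use */
            int length;             /* total data length, in bytes */
            dma_addr_t first_dma;
            struct scatterlist sg[];        /* allocated inline, saving the
                                             * separate nvme_prp allocation */
    };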
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Sat, 15 Oct 2011 11:33:46 +0000 (07:33 -0400)]
NVMe: Simplify completion handling
Instead of encoding the handler type in the bottom two bits of the
per-completion context pointer, store the handler function as well
as the context pointer. This gives us more flexibility and the code
is clearer. It comes at the cost of an extra 8k of memory per queue,
but this feels like a reasonable price to pay.
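A minimal sketch of the approach, with illustrative names:

    typedef void (*nvme_completion_fn)(struct nvme_dev *dev, void *ctx,
                                       struct nvme_completion *cqe);

    struct nvme_cmd_info {
            nvme_completion_fn fn;  /* handler stored explicitly ...      */
            void *ctx;              /* ... so no bits hide in the pointer */
    };

    /* Completion no longer decodes a handler type from low-order bits: */
    static void nvme_finish_cmd(struct nvme_queue *nvmeq, int cmdid,
                                struct nvme_completion *cqe)
    {
            struct nvme_cmd_info *info = &nvmeq->cmd_info[cmdid];

            info->fn(nvmeq->dev, info->ctx, cqe);
    }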
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Fri, 4 Nov 2011 20:24:23 +0000 (16:24 -0400)]
NVMe: Update Identify Controller data structure
The driver was still using an old definition of Identify Controller
which only came to light once we started using the 'number of namespaces'
field properly.
The existing calculation underestimated the number of pages required
as it did not take into account the pointer at the end of each page.
The replacement calculation may overestimate the number of pages required
if the last page in the PRP List is entirely full. By using ->npages
as a counter as we fill in the pages, we ensure that we don't try to
free a page that was never allocated.
Signed-off-by: Nisheeth Bhat <nisheeth.bhat@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Mon, 19 Sep 2011 21:08:14 +0000 (17:08 -0400)]
NVMe: Create nvme_identify and nvme_get_features functions
Instead of open-coding calls to nvme_submit_admin_cmd, these
small wrappers are simpler to use (the patch removes 14 lines from
nvme_dev_add() for example).
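The wrappers follow this general shape (a sketch: the exact parameters,
particularly for Get Features, may differ from the driver source):

    static int nvme_identify(struct nvme_dev *dev, unsigned nsid, unsigned cns,
                             dma_addr_t dma_addr)
    {
            struct nvme_command c;

            memset(&c, 0, sizeof(c));
            c.identify.opcode = nvme_admin_identify;
            c.identify.nsid = cpu_to_le32(nsid);
            c.identify.prp1 = cpu_to_le64(dma_addr);
            c.identify.cns = cpu_to_le32(cns);

            return nvme_submit_admin_cmd(dev, &c, NULL);
    }

    static int nvme_get_features(struct nvme_dev *dev, unsigned fid,
                                 unsigned nsid, dma_addr_t dma_addr,
                                 u32 *result)
    {
            struct nvme_command c;

            memset(&c, 0, sizeof(c));
            c.features.opcode = nvme_admin_get_features;
            c.features.nsid = cpu_to_le32(nsid);
            c.features.prp1 = cpu_to_le64(dma_addr);
            c.features.fid = cpu_to_le32(fid);

            return nvme_submit_admin_cmd(dev, &c, result);
    }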
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Tue, 13 Sep 2011 21:01:39 +0000 (17:01 -0400)]
NVMe: Correct sg list setup in nvme_map_user_pages
Our SG list was constructed to always fill the entire first page, even
if that was more than the length of the I/O. This is probably harmless,
but some IOMMUs might do something bad.
Correcting the first call to sg_set_page() made it look a lot closer to
the sg_set_page() in the loop, so fold the first call to sg_set_page()
into the loop.
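In outline, the folded loop looks something like this (illustrative; the
point is that each entry is clamped to the remaining I/O length, so the
first entry no longer claims a whole page):

    for (i = 0; i < count; i++) {
            unsigned int bytes = min_t(unsigned int, length,
                                       PAGE_SIZE - offset);

            sg_set_page(&sg[i], pages[i], bytes, offset);
            length -= bytes;
            offset = 0;     /* only the first page starts mid-page */
    }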
Reported-by: Nisheeth Bhat <nisheeth.bhat@intel.com> Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
Matthew Wilcox [Fri, 20 May 2011 17:03:42 +0000 (13:03 -0400)]
NVMe: Rework ioctls
Remove the special-purpose IDENTIFY, GET_RANGE_TYPE, DOWNLOAD_FIRMWARE
and ACTIVATE_FIRMWARE commands. Replace them with a generic ADMIN_CMD
ioctl that can submit any admin command.
Add a new ID ioctl that returns the namespace ID of the queried device.
It corresponds to the SCSI Idlun ioctl.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Fri, 20 May 2011 13:34:43 +0000 (09:34 -0400)]
NVMe: Add the nvme thread to the wait queue before waking it up
If the I/O was not completed by a single NVMe command, we add the
bio to the congestion list and wake up the kthread to resubmit it.
But the kthread calls remove_wait_queue() unconditionally, which
will oops if it's not on the wait queue. So add the kthread to
the wait queue before waking it up.
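A sketch of the corrected ordering; sq_cong, sq_full and sq_cong_wait are
taken to be the queue's congestion bio list, wait queue head and wait queue
entry, and the wake-up itself is shown only schematically:

    spin_lock_irq(&nvmeq->q_lock);
    if (bio_list_empty(&nvmeq->sq_cong))
            add_wait_queue(&nvmeq->sq_full, &nvmeq->sq_cong_wait);
    bio_list_add(&nvmeq->sq_cong, bio);
    spin_unlock_irq(&nvmeq->q_lock);
    /* ... only now wake the kthread; its unconditional
     * remove_wait_queue() will then find the entry present ... */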
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Wed, 11 May 2011 20:30:59 +0000 (13:30 -0700)]
NVMe: Return real error from nvme_create_queue
nvme_setup_io_queues() was assuming that a NULL return from
nvme_create_queue() was an out-of-memory error. That's not necessarily
true; the adapter might return -EIO, for example. Change the calling
convention to return an ERR_PTR on failure instead of NULL.
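On the caller side the new convention looks like this (sketch):

    nvmeq = nvme_create_queue(dev, qid, q_depth, vector);
    if (IS_ERR(nvmeq))
            return PTR_ERR(nvmeq);  /* may be -EIO, -ENOMEM, ... */

while nvme_create_queue() itself returns ERR_PTR() wrapping whatever error
it hit, rather than NULL.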
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Thu, 12 May 2011 17:51:41 +0000 (13:51 -0400)]
NVMe: Handle failures from memory allocations in nvme_setup_prps
If any of the memory allocations in nvme_setup_prps fail, handle it by
modifying the passed-in data length to reflect the number of bytes we are
actually able to send. Also allow the caller to specify the GFP flags
they need; for user-initiated commands, we can use GFP_KERNEL allocations.
The various callers are updated to handle this; the main I/O path is already
prepared for it (the same situation can arise when nvme_map_bio is unable to
map all the segments of the I/O).
The other callers return -ENOMEM instead of doing partial I/Os.
Reported-by: Andi Kleen <andi@firstfloor.org> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Fri, 6 May 2011 12:45:47 +0000 (08:45 -0400)]
NVMe: Use an IDA to allocate minor numbers
The current approach of using the namespace ID as the minor number
doesn't work when there are multiple adapters in the machine. Rather
than statically partitioning the number of namespaces between adapters,
dynamically allocate minor numbers to namespaces as they are detected.
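A minimal sketch of IDA-backed minor allocation; the name nvme_minor_ida and
the use of the ida_simple_* helpers are illustrative assumptions:

    #include <linux/idr.h>
    #include <linux/kdev_t.h>

    static DEFINE_IDA(nvme_minor_ida);

    /* Allocate a minor for a newly detected namespace, whichever adapter
     * it belongs to. */
    static int nvme_get_minor(void)
    {
            return ida_simple_get(&nvme_minor_ida, 0, 1 << MINORBITS,
                                  GFP_KERNEL);
    }

    static void nvme_put_minor(int minor)
    {
            ida_simple_remove(&nvme_minor_ida, minor);
    }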
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Tue, 19 Apr 2011 19:04:20 +0000 (15:04 -0400)]
NVMe: Time out initialisation after a few seconds
The device reports (in its capability register) how long it will take
to initialise. If that time elapses before the ready bit becomes set,
conclude the device is broken and refuse to initialise it. Log a nice
error message so the user knows why we did nothing.
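Schematically, assuming the driver's usual register accessors and an
NVME_CAP_TIMEOUT() macro that extracts the CAP.TO field (which is expressed
in 500 ms units):

    u64 cap = readq(&dev->bar->cap);
    unsigned long timeout = jiffies + (NVME_CAP_TIMEOUT(cap) + 1) * HZ / 2;

    while (!(readl(&dev->bar->csts) & NVME_CSTS_RDY)) {
            msleep(100);
            if (time_after(jiffies, timeout)) {
                    dev_err(&dev->pci_dev->dev,
                            "device not ready; aborting initialisation\n");
                    return -ENODEV;
            }
    }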
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Tue, 22 Mar 2011 19:55:45 +0000 (15:55 -0400)]
NVMe: Correct the Controller Configuration settings
The arbitration field was extended by one bit, shifting the shutdown
notification bits by one. Also, the SQ/CQ entry size was made
configurable for future extensions.
Reported-by: Paul Luse <paul.e.luse@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Mon, 21 Mar 2011 13:48:57 +0000 (09:48 -0400)]
NVMe: Change the definition of nvme_user_io
The read and write commands don't define a 'result', so there's no need
to copy it back to userspace.
Remove the ability of the ioctl to submit commands to a different
namespace; it's just asking for trouble, and the use case I have in mind
will be addressed through a different ioctl in the future. That removes
the need for both the block_shift and nsid arguments.
Check that the opcode is either 'read' or 'write'. Other opcodes may be
added in the future, but they will need a different structure definition.
The nblocks field is redefined to be 0-based. This allows the user to
request the full 65536 blocks.
Don't byteswap the reftag, apptag and appmask. Martin Petersen tells
me these are calculated in big-endian and are transmitted to the device
in big-endian.
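For reference, the revised structure looks roughly like this (a sketch; the
header in the driver is authoritative):

    struct nvme_user_io {
            __u8    opcode;         /* must be a read or a write */
            __u8    flags;
            __u16   control;
            __u16   nblocks;        /* 0-based: 0xffff means 65536 blocks */
            __u16   rsvd;
            __u64   metadata;
            __u64   addr;
            __u64   slba;
            __u32   dsmgmt;
            __u32   reftag;         /* passed through without byteswapping */
            __u16   apptag;         /* likewise */
            __u16   appmask;        /* likewise */
    };

Note the absence of nsid, block_shift and result fields.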
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Sat, 19 Mar 2011 18:55:38 +0000 (14:55 -0400)]
NVMe: Add compat_ioctl
Make ioctls work for 32-bit applications on 64-bit kernels. The structures
are defined to be the same for both 32- and 64-bit applications, so
we can use the same handler for both.
Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Wed, 16 Mar 2011 20:52:19 +0000 (16:52 -0400)]
NVMe: Simplify queue lookup
Fill in all the num_possible_cpus() entries with duplicate pointers.
This reduces the complexity of the frequently-called get_nvmeq(), as
well as avoiding a bug in it when there are fewer queues than CPUs.
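A sketch of the scheme with illustrative names; the +1 offset assumes queue
0 is the admin queue:

    static void nvme_assign_io_queues(struct nvme_dev *dev, int nr_io_queues)
    {
            int cpu;

            /* Duplicate queue pointers so every possible CPU has one, even
             * when there are fewer I/O queues than CPUs. */
            for (cpu = 0; cpu < num_possible_cpus(); cpu++)
                    dev->queues[cpu + 1] =
                            dev->queues[(cpu % nr_io_queues) + 1];
    }

    static struct nvme_queue *get_nvmeq(struct nvme_dev *dev)
    {
            /* Caller pairs this with put_cpu() when it is done. */
            return dev->queues[get_cpu() + 1];
    }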
Reported-by: Shane Michael Matthews <shane.matthews@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Wed, 16 Mar 2011 20:43:40 +0000 (16:43 -0400)]
NVMe: Fix off-by-one when filling in PRP lists
If the last element in the PRP list fits on the end of the page, there's
no need to allocate an extra page just to hold that single element.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
NVMe: Update admin opcodes to match the 1.0RC spec
Signed-off-by: Krzysztof Wierzbicki <krzysztof.wierzbicki@intel.com> Signed-off-by: Matthew Wilcox <willy@linux.intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Thu, 24 Feb 2011 13:46:00 +0000 (08:46 -0500)]
NVMe: Fix discontiguous accesses
When we submit subsequent portions of the I/O, we need to access the
updated block, not start reading again from the original position.
This was showing up as miscompares in the XFS randholes testcase.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Wed, 23 Feb 2011 20:20:00 +0000 (15:20 -0500)]
NVMe: Handle bios that contain non-virtually contiguous addresses
NVMe scatterlists must be virtually contiguous, like almost all I/Os.
However, when the filesystem lays out files with a hole, it can be that
adjacent LBAs map to non-adjacent virtual addresses. Handle this by
submitting one NVMe command at a time for each virtually discontiguous
range.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Tue, 22 Feb 2011 19:18:30 +0000 (14:18 -0500)]
NVMe: Implement Flush
Linux implements Flush as a bit in the bio. That means there may also be
data associated with the flush; if so the flush should be sent before the
data. To avoid completing the bio twice, I add CMD_CTX_FLUSH to indicate
the completion routine should do nothing.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Tue, 15 Feb 2011 21:16:02 +0000 (16:16 -0500)]
NVMe: Rename nr_queues to nr_io_queues
I got confused about whether this included the admin queue or not, and
had to resort to reading the spec. It doesn't include the admin queue,
so make that clear in the name.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Wed, 2 Mar 2011 23:37:18 +0000 (18:37 -0500)]
NVMe: Add a kthread to handle the congestion list
Instead of trying to resubmit I/Os in the I/O completion path (in
interrupt context), wake up a kthread which will resubmit I/O from
user context. This allows mke2fs to run to completion.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Mon, 14 Feb 2011 20:55:33 +0000 (15:55 -0500)]
NVMe: Handle failures differently in nvme_submit_bio_queue()
Return -EBUSY if the queue is full or -ENOMEM if we failed to allocate
memory (or map a scatterlist). Also use GFP_ATOMIC to allocate the
nvme_bio and move the locking to the callers of nvme_submit_bio_queue().
In nvme_make_request(), don't permit an I/O to jump the queue -- if the
congestion list already has an entry, just add to the tail, rather than
trying to submit.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Thu, 10 Feb 2011 15:47:55 +0000 (10:47 -0500)]
NVMe: Pass the nvme_dev to nvme_free_prps and nvme_setup_prps
We were passing the nvme_queue to access the q_dmadev for the
dma_alloc_coherent calls, but since we moved to the dma pool API,
we really only need the nvme_dev.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Thu, 10 Feb 2011 14:03:06 +0000 (09:03 -0500)]
NVMe: Rename nvme_req_info to nvme_bio
There are too many things called 'info' in this driver. This data
structure is auxiliary information for a struct bio, so call it nvme_bio,
or nbio when used as a variable.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Add a pointer to the nvme_req_info to hold a new data structure
(nvme_prps) which contains a list of the pages allocated to this
particular request for holding PRP list entries. nvme_setup_prps()
now returns this pointer.
To allocate and free the memory used for PRP lists, we need a struct
device, so we need to pass the nvme_queue pointer to many functions
which didn't use to need it.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Mon, 7 Feb 2011 17:45:24 +0000 (12:45 -0500)]
NVMe: Handle the congestion list a little better
In the bio completion handler, check for bios on the congestion list
for this NVM queue. Also, lock the congestion list in the make_request
function as the queue may end up being shared between multiple CPUs.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Sun, 6 Feb 2011 23:30:16 +0000 (18:30 -0500)]
NVMe: Record the timeout for each command
In addition to recording the completion data for each command, record
the anticipated completion time. Choose a timeout of 5 seconds for
normal I/Os and 60 seconds for admin I/Os.
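Schematically, with illustrative names and the timeouts from the description
above:

    #define NVME_IO_TIMEOUT         (5 * HZ)        /* normal I/O */
    #define NVME_ADMIN_TIMEOUT      (60 * HZ)       /* admin commands */

    struct nvme_cmd_info {
            void *ctx;              /* completion data, as before */
            unsigned long timeout;  /* jiffies + NVME_*_TIMEOUT, set when
                                     * the command ID is allocated */
    };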
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Sun, 6 Feb 2011 14:01:00 +0000 (09:01 -0500)]
NVMe: Need to lock queue during interrupt handling
If we're sharing a queue between multiple CPUs and we cancel a sync I/O,
we must have the queue locked to avoid corrupting the stack of the thread
that submitted the I/O. It turns out this is the same locking that's needed
for the threaded irq handler, so share that code.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Sun, 6 Feb 2011 13:51:15 +0000 (08:51 -0500)]
NVMe: Detect command IDs completing that are out of range
If the adapter completes a command ID that is outside the bounds of
the array, return CMD_CTX_INVALID instead of random data, and print a
message in the sync_completion handler (which is rapidly becoming the
misc completion handler :-)
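Roughly (the helper name here is illustrative):

    static void *cmdid_to_ctx(struct nvme_queue *nvmeq, int cmdid)
    {
            /* A completion beyond the array bounds yields a sentinel rather
             * than whatever happens to live past the end of the array. */
            if (cmdid >= nvmeq->q_depth)
                    return CMD_CTX_INVALID;
            return nvmeq->cmd_info[cmdid].ctx;
    }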
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Sun, 6 Feb 2011 12:28:06 +0000 (07:28 -0500)]
NVMe: Add a module parameter to use a threaded interrupt
We're currently calling bio_endio from hard interrupt context. This is
not a good idea for preemptible kernels as it will cause longer latencies.
Using a threaded interrupt will run the entire queue processing mechanism
(including bio_endio) in a thread, which can be preempted. Unfortunately,
it also adds about 7us of latency to the single-I/O case, so make it a
module parameter for the moment.
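A sketch of the parameter and the IRQ setup it selects between; handler
names are illustrative:

    static int use_threaded_interrupts;
    module_param(use_threaded_interrupts, int, 0);

    static int queue_request_irq(struct nvme_dev *dev, struct nvme_queue *nvmeq,
                                 const char *name)
    {
            int vector = dev->entry[nvmeq->cq_vector].vector;

            if (use_threaded_interrupts)
                    /* The hard handler only checks for work; queue processing
                     * (including bio_endio) runs in the preemptible thread. */
                    return request_threaded_irq(vector, nvme_irq_check,
                                                nvme_irq, IRQF_SHARED,
                                                name, nvmeq);
            return request_irq(vector, nvme_irq, IRQF_SHARED, name, nvmeq);
    }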
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Fri, 4 Feb 2011 21:03:56 +0000 (16:03 -0500)]
NVMe: Allow fatal signals to interrupt I/O
If the user sends a fatal signal, sleeping in the TASK_KILLABLE state
permits the task to be aborted. The only wrinkle is making sure that
if/when the command completes later, it doesn't upset anything.
Handle this by setting the data pointer to 0, and checking the value
isn't NULL in the sync completion path. Eventually, bios can be cancelled
through this path too. Note that the cmdid isn't freed to prevent reuse.
We should also abort the command in the future, but this is a good start.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Tue, 1 Feb 2011 17:49:38 +0000 (12:49 -0500)]
NVMe: Move sysfs entries to the right place
Because I wasn't setting driverfs_dev, the devices were showing up under
/sys/devices/virtual/block. Now they appear underneath the PCI device
which they belong to.
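The fix amounts to a one-liner at disk registration time (driverfs_dev being
the gendisk field kernels of this vintage use to parent the block device in
sysfs):

    disk->driverfs_dev = &dev->pci_dev->dev;    /* parent: the PCI device */
    add_disk(disk);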
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Wed, 26 Jan 2011 19:34:32 +0000 (14:34 -0500)]
NVMe: Change NVME_IOCTL_GET_RANGE_TYPE to return all the ranges
Factor out most of nvme_identify() into a new nvme_submit_user_admin_command()
function. Change nvme_get_range_type() to call it and change nvme_ioctl to
realise that it's getting back all 64 ranges.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Matthew Wilcox [Thu, 20 Jan 2011 18:42:34 +0000 (13:42 -0500)]
NVMe: Fix admin IRQ claim on real hardware
The admin IRQ is supposed to use the pin-based (or single message MSI)
interrupt. Accomplish this by filling in entry[0]'s vector with the
INTx irq number.
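Schematically:

    /* Before MSI-X is configured, entry[0] carries the legacy INTx (or
     * single-message MSI) interrupt, so the admin queue uses pdev->irq. */
    dev->entry[0].vector = pdev->irq;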
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>