Dave Jiang [Thu, 7 Feb 2013 21:38:32 +0000 (14:38 -0700)]
ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDING
There is a race that can hit during __cleanup() when the ioat->head pointer is
incremented during descriptor submission. The __cleanup() can clear the
PENDING flag when it does not see any active descriptors. This causes newly
submitted descriptors to be ignored because the COMPLETION_PENDING flag has
been cleared. This was introduced when code was adapted from ioatdma v1 to
ioatdma v2. For v2 and v3, the IOAT_COMPLETION_PENDING flag is abandoned and a
new flag, IOAT_CHAN_ACTIVE, is used instead. This flag is also protected by the
prep_lock when it is modified, in order to avoid the race.
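A minimal sketch of the idea (lock and field names follow the ioat driver but
the code here is illustrative, not the actual patch): the active flag is only
set and cleared under the same prep_lock that guards head updates, so cleanup
can never race with a submission.

    /* submission side: publish descriptors and mark the channel active
     * while holding prep_lock */
    spin_lock_bh(&ioat->prep_lock);
    ioat->head += desc_count;
    set_bit(IOAT_CHAN_ACTIVE, &chan->state);
    spin_unlock_bh(&ioat->prep_lock);

    /* cleanup side: only clear the flag when nothing is outstanding,
     * under the same lock */
    spin_lock_bh(&ioat->prep_lock);
    if (ioat->head == ioat->tail)
        clear_bit(IOAT_CHAN_ACTIVE, &chan->state);
    spin_unlock_bh(&ioat->prep_lock);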
Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Mika Westerberg [Thu, 7 Feb 2013 15:36:28 +0000 (17:36 +0200)]
dw_dmac: add support for Lynxpoint DMA controllers
The Intel Lynxpoint PCH Low Power Subsystem has a DMA controller to support
general purpose serial buses like SPI, I2C, and HSUART. This controller is
enumerated from the ACPI namespace with ACPI ID INTL9C60.
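ACPI enumeration of this kind is typically wired up with an ACPI match table;
a minimal sketch (the table name is illustrative and the actual driver hookup
may differ):

    #include <linux/acpi.h>
    #include <linux/mod_devicetable.h>

    static const struct acpi_device_id dw_dma_acpi_ids[] = {
        { "INTL9C60", 0 },    /* Lynxpoint LPSS DMA */
        { }
    };
    MODULE_DEVICE_TABLE(acpi, dw_dma_acpi_ids);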
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Andy Shevchenko [Fri, 25 Jan 2013 09:48:03 +0000 (11:48 +0200)]
dw_dmac: return proper residue value
Currently the driver returns the full length of the active descriptor, which is
wrong. We have to walk through the active descriptor and subtract the length of
each child already sent in the chain from the total length, along with the
amount of data the DMA channel registers report as already transferred.
The cyclic case is not handled by this patch because the len field in the
descriptor structure is left untouched by the original code.
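A sketch of the accounting described above (the structure and field names are
illustrative, not the driver's real layout):

    #include <linux/list.h>
    #include <linux/types.h>

    struct demo_desc {                  /* illustrative descriptor */
        struct list_head desc_node;
        struct list_head tx_list;       /* children of the first descriptor */
        u32 len;                        /* length of this segment */
        u32 total_len;                  /* length of the whole transfer */
    };

    /* residue = total length
     *         - lengths of children already handed to the hardware
     *         - bytes already moved for the segment currently in flight */
    static u32 demo_residue(struct demo_desc *first, struct demo_desc *active,
                            u32 bytes_done_in_active)
    {
        struct demo_desc *child;
        u32 residue = first->total_len;

        list_for_each_entry(child, &first->tx_list, desc_node) {
            if (child == active)
                break;                  /* later children not sent yet */
            residue -= child->len;
        }
        return residue - bytes_done_in_active;
    }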
Barry Song [Fri, 14 Dec 2012 11:06:58 +0000 (11:06 +0000)]
DMAEngine: sirf: lock the shared registers access in sirfsoc_dma_terminate_all
Just like Russell pointed out in "DMAEngine: sirf: add DMA
pause/resume support" at
http://www.spinics.net/lists/arm-kernel/msg212496.html
here I find sirfsoc_dma_terminate_all() has the same problem,
so move the locking ahead of the register access.
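A sketch of the change (register names and the exact accesses are
illustrative): the read-modify-write of shared registers must happen inside
the lock.

    spin_lock_irqsave(&schan->lock, flags);
    val = readl_relaxed(sdma->base + DEMO_INT_EN);        /* shared register */
    writel_relaxed(val & ~(1 << ch), sdma->base + DEMO_INT_EN);
    writel_relaxed(1 << ch, sdma->base + DEMO_CH_INT);     /* ack pending irq */
    spin_unlock_irqrestore(&schan->lock, flags);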
Signed-off-by: Barry Song <Baohua.Song@csr.com> Cc: Russell King <linux@arm.linux.org.uk> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Andy Shevchenko [Fri, 18 Jan 2013 12:14:15 +0000 (14:14 +0200)]
dw_dmac: move soft LLP code from tasklet to dwc_scan_descriptors
The proper place for the main logic of the soft LLP mode is
dwc_scan_descriptors. It prevents the transfer from being unexpectedly aborted
when the user calls dwc_tx_status.
Andy Shevchenko [Thu, 17 Jan 2013 08:03:01 +0000 (10:03 +0200)]
dw_dmac: don't exceed AHB master number in dwc_get_data_width
The driver assumes that the hardware has two AHB masters, which is not always
true. In such cases we must not exceed the number of AHB masters present in the
hardware. With the scheme proposed in this patch, we choose the master with the
highest available number whenever the requested one exceeds the masters present.
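A sketch of the clamp (nr_masters and data_width are assumed to be the
autoconfigured fields of the driver's dw_dma structure of that era):

    static unsigned int demo_get_data_width(struct dw_dma *dw,
                                            unsigned int master)
    {
        if (master >= dw->nr_masters)
            master = dw->nr_masters - 1;   /* highest existing master */

        return dw->data_width[master];
    }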
Andy Shevchenko [Wed, 16 Jan 2013 13:48:50 +0000 (15:48 +0200)]
dw_dmac: allocate dma descriptors from DMA_COHERENT memory
Currently descriptors are allocated from normal cacheable memory, which slows
down filling the descriptors, as we need to call cache coherency routines
afterwards. It is better to allocate memory for these descriptors from
DMA-coherent memory. This also makes the code much cleaner.
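A sketch of the allocation change using a coherent DMA pool (names are
illustrative and error handling is trimmed; the exact pool parameters are
assumptions):

    #include <linux/dmapool.h>

    dw->desc_pool = dmam_pool_create("dw_dmac_desc_pool", dev,
                                     sizeof(struct dw_desc), 4, 0);
    if (!dw->desc_pool)
        return -ENOMEM;

    /* descriptors now come from coherent memory, no cache sync needed */
    desc = dma_pool_alloc(dw->desc_pool, GFP_ATOMIC, &phys);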
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Tested-by: Mika Westerberg <mika.westerberg@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Matt Porter [Thu, 10 Jan 2013 18:41:04 +0000 (13:41 -0500)]
dma: edma: fix slave config dependency on direction
The edma_slave_config() implementation depends on the
direction field such that it will not properly configure
a slave channel when called without direction set.
This fixes the implementation so that the slave config
is copied as-is and prep_slave_sg() handles the
direction-dependent configuration. spi-omap2-mcspi and
omap_hsmmc both expose this bug as they configure the
slave channel config from a common path with an unconfigured
direction field.
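A sketch of the approach (struct and field names are illustrative): store the
whole config untouched, and let the prep call pick the direction-specific
fields later.

    static int demo_slave_config(struct demo_chan *echan,
                                 struct dma_slave_config *cfg)
    {
        memcpy(&echan->cfg, cfg, sizeof(*cfg));   /* no direction check here */
        return 0;
    }

    /* prep_slave_sg() then selects src_* or dst_* fields based on the
     * direction argument passed by the client */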
Signed-off-by: Matt Porter <mporter@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Fabio Baltieri [Tue, 18 Dec 2012 11:25:14 +0000 (12:25 +0100)]
dmaengine: ste_dma40: add software lli support
This patch adds support for managing LLIs in software for selected physical
channels. There is a hardware issue in certain controllers due to which, on
certain occasions, hardware LLI cannot be used on some physical channels. To
avoid the hardware issue on a specific physical channel, the channel number can
be added to the list of soft_lli_channels, and thereafter all
peripheral-to-memory transfers on that channel will use software LLI.
Soft LLI introduces relink overhead that could impact performance for
certain use cases.
This is based on a previous patch by Narayanan Gopalakrishnan.
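A sketch of the per-channel selection (field names are illustrative):

    static bool demo_use_soft_lli(struct demo_plat_data *plat, int phy_chan)
    {
        int i;

        for (i = 0; i < plat->num_soft_lli_chans; i++)
            if (plat->soft_lli_chans[i] == phy_chan)
                return true;    /* relink this channel in software */

        return false;           /* normal hardware LLI */
    }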
Fabio Baltieri [Thu, 13 Dec 2012 12:46:16 +0000 (13:46 +0100)]
dmaengine: ste_dma40: add a done queue for completed descriptors
This is to keep the active queue for only those transfers which are
actually active in the hardware. Descriptors will be moved to the done
queue after they are completed in the hardware (interrupt handler) but
before all the cleanup work has been completed (tasklet).
Mostly based on a previous patch by Rabin Vincent.
dmaengine: ste_dma40: physical channels number correction
DMAC_ICFG[0:2]=SCHNB only allows the number of physical channels to be counted
in multiples of 4, so it was fine for platforms with 8 channels but cannot be
used for newer versions (with 10 or 14 channels). This patch allows the number
of physical channels for a DMA device to be provided via platform_data, while
still relying on SCHNB if platform_data announces 0 channels.
Rabin Vincent [Thu, 17 May 2012 08:17:38 +0000 (13:47 +0530)]
dmaengine: ste_dma40: don't allow high priority dest event lines
Hardware bug: when a logical channel is triggered by a high priority
destination event line, an extra packet transaction is generated if there is
significant data-write response latency on the previous logical channel A, the
source transfer of the current logical channel B has already completed, and no
other channel with a higher priority than B is waiting for execution.
Software workaround: do not set the high priority level for the
destination event lines that trigger logical channels.
Narayanan G [Fri, 20 Jan 2012 08:26:14 +0000 (13:56 +0530)]
dmaengine: ste_dma40: don't check for pm_runtime_suspended()
The check for runtime suspend is not needed during a regular suspend, as
the framework takes care of this. This fixes the issue of the DMA driver
not letting the system go into deep sleep on the first attempt.
Signed-off-by: Narayanan G <narayanan.gopalakrishnan@stericsson.com> Reviewed-by: Rabin Vincent <rabin.vincent@stericsson.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Per Forlin [Tue, 18 Oct 2011 16:39:47 +0000 (18:39 +0200)]
dmaengine: ste_dma40: set dma max seg size
The maximum DMA segment size is (0xffff x data_width). If the max segment
size is not set, it defaults to 64k. This results in failures when
transferring 64k in byte mode.
Larger segment sizes may be supported by splitting large transfers.
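A sketch of advertising the limit so clients split larger transfers (the
constant shown corresponds to byte-mode; the real limit scales with the
configured data width):

    #include <linux/dma-mapping.h>

    /* tell the DMA mapping layer how big one segment may be */
    dma_set_max_seg_size(base->dev, 0xffff);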
Per Forlin [Wed, 28 Sep 2011 07:32:20 +0000 (09:32 +0200)]
dmaengine: ste_dma40: use writel_relaxed for lcxa
lcpa and lcla are written often and the cache_sync() overhead in writel
is costly, especially for wlan where every single network packet (in RX
mode) corresponds to a separate DMA transfer.
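The difference, in short (generic ARM I/O accessor semantics, not
driver-specific code):

    writel(val, addr);          /* implies the full I/O write barrier / L2 sync */
    writel_relaxed(val, addr);  /* plain store; ordering is the caller's job */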
pl080.h: moved from arm/include/asm/hardware to include/linux/amba/
The header is used by drivers/dma/amba-pl08x.c, which can be compiled
under x86, where PL080 exists under a PCI-to-AMBA bridge. This patch
moves it where it can be accessed by other architectures, and fixes
all users.
Heikki Krogerus [Thu, 10 Jan 2013 08:53:06 +0000 (10:53 +0200)]
dma: dw_dmac: clear suspend bit during termination
The DMA transfer could not be established if it was previously paused and
terminated. In that case the channel's suspend bit remains set, which prevents
anything from being transferred until the channel is resumed.
The patch adds the dwc_chan_resume() call instead of a plain flag assignment.
That clears the DWC_CFGL_CH_SUSP bit as well during termination.
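A sketch of such a resume helper; the register accessors and the paused flag
follow dw_dmac conventions but are illustrative here, only DWC_CFGL_CH_SUSP
comes from the description above.

    static void demo_chan_resume(struct dw_dma_chan *dwc)
    {
        u32 cfglo = channel_readl(dwc, CFG_LO);

        channel_writel(dwc, CFG_LO, cfglo & ~DWC_CFGL_CH_SUSP);
        dwc->paused = false;
    }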
Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Andy Shevchenko [Thu, 10 Jan 2013 09:11:41 +0000 (11:11 +0200)]
dw_dmac: make usage of dw_dma_slave optional
The driver requires a custom slave configuration to be present in order to
perform slave transfers. Nevertheless, in some cases we only need the request
line as additional information on top of the generic slave configuration. The
request line is provided by the slave_id parameter of the dma_slave_config
structure. That's why the custom slave configuration can be made optional for
such cases.
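A sketch of the fallback (struct and field names are illustrative; slave_id is
the dma_slave_config field of that kernel generation):

    static int demo_set_runtime_config(struct demo_chan *dwc,
                                       struct dma_slave_config *sconfig)
    {
        memcpy(&dwc->dma_sconfig, sconfig, sizeof(*sconfig));
        dwc->request_line = sconfig->slave_id;  /* enough for simple clients */
        return 0;
    }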
Andy Shevchenko [Thu, 10 Jan 2013 08:53:03 +0000 (10:53 +0200)]
dw_dmac: store direction in the custom channel structure
Currently the direction value comes both from the generic slave configuration
structure and explicitly as a preparation function parameter. The former is
considered obsolete. Thus, we have to store the value passed to the preparation
function somewhere in our structures to be able to use it later. The best
candidate to provide the storage is the custom channel structure. For now we
still keep and check the direction field of the slave config structure as well.
Andy Shevchenko [Thu, 10 Jan 2013 08:53:02 +0000 (10:53 +0200)]
dw_dmac: call .probe after we have a device in place
If we don't yet have the platform device for the driver when it is being loaded
we fail to probe the driver. So instead of calling probe() directly we call
platform_driver_register(). It will call probe() immediately if we already have
the device, but it also makes the driver work on platforms where the platform
device is created later.
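A sketch of the registration (demo_probe/demo_remove are placeholders): the
driver core invokes probe() whenever a matching device appears.

    #include <linux/platform_device.h>

    static struct platform_driver demo_dma_driver = {
        .probe  = demo_probe,
        .remove = demo_remove,
        .driver = {
            .name = "dw_dmac",
        },
    };

    static int __init demo_init(void)
    {
        /* probe runs now if the device exists, or later when it is created */
        return platform_driver_register(&demo_dma_driver);
    }
    subsys_initcall(demo_init);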
Andy Shevchenko [Thu, 10 Jan 2013 08:52:58 +0000 (10:52 +0200)]
dma: dw_dmac: check direction properly in dw_dma_cyclic_prep
dma_transfer_direction is a normal enum, which means its values can't be used
as bit masks. Adjust the check accordingly and move it above the first use of
the direction parameter, given how the parameter is used afterwards.
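A sketch of the corrected check: compare the enum values instead of masking
them, and reject bad input before 'direction' is used.

    if (unlikely(direction != DMA_MEM_TO_DEV && direction != DMA_DEV_TO_MEM))
        return NULL;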
Andy Shevchenko [Thu, 10 Jan 2013 08:52:57 +0000 (10:52 +0200)]
dma: at_hdmac: check direction properly for cyclic transfers
dma_transfer_direction is a normal enum, which means its values can't be used
as bit masks. Adjust the check accordingly and move it above the first use of
the direction parameter, given how the parameter is used afterwards.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Andy Shevchenko [Wed, 9 Jan 2013 08:17:13 +0000 (10:17 +0200)]
dw_dmac: update tx_node_active in dwc_do_single_block
The "else" keyword in the dw_dma_tasklet is removed as well. All together
simplifies the logic of the code and understanding of what is happening there.
Andy Shevchenko [Wed, 9 Jan 2013 08:17:01 +0000 (10:17 +0200)]
dw_dmac: absence of pdata isn't critical when autocfg is set
The patch allows the device to be probed when platform data is absent and
hardware auto-configuration is enabled. In that case default platform data is
used, where the channel allocation order is set to ascending, channel priority
is set to ascending, and the private property is set to true.
Fabio Estevam [Tue, 8 Jan 2013 01:48:39 +0000 (23:48 -0200)]
dma: mxs-dma: Fix build warnings with W=1
Fix the following warnings when building with the W=1 option:
drivers/dma/mxs-dma.c: In function 'mxs_dma_alloc_chan_resources':
drivers/dma/mxs-dma.c:368:25: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
drivers/dma/mxs-dma.c: In function 'mxs_dma_prep_slave_sg':
drivers/dma/mxs-dma.c:481:17: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
drivers/dma/mxs-dma.c:494:3: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
drivers/dma/mxs-dma.c:515:14: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
drivers/dma/mxs-dma.c: In function 'mxs_dma_prep_dma_cyclic':
drivers/dma/mxs-dma.c:563:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
Laxman Dewangan [Sun, 6 Jan 2013 16:22:02 +0000 (21:52 +0530)]
dma: tegra: add support for channel wise pause
Some NVIDIA SoCs, such as Tegra114, support channel-wise pause control in place
of the global pause, which pauses all DMA channels. On SoCs that support
channel-wise pause, the global pause is used for clock gating during register
access as well as for pausing all DMA channels; hence DMA registers are not
accessible while DMAs are globally paused on these new SoCs.
Add support for the channel-wise pause feature on SoCs that support it.
Maciej Sosnowski [Wed, 23 May 2012 15:27:07 +0000 (17:27 +0200)]
dca: check against empty dca_domains list before unregister provider
When providers get blocked, unregister_dca_providers() is called, ending up
with the dca_providers and dca_domains lists emptied. DCA should be prevented
from trying to unregister any provider if the dca_domains list is found empty.
Cc: <stable@vger.kernel.org> Reported-by: Jiang Liu <jiang.liu@huawei.com> Tested-by: Gaohuai Han <hangaohuai@huawei.com> Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
Dave Jiang [Tue, 27 Nov 2012 22:16:08 +0000 (15:16 -0700)]
ioat: remove chanerr mask setting for IOAT v3.x
The existing code sets a value in the PCI_CHANERRMSK_INT register
as a workaround for a pre-silicon bug on the Intel 5520 IO hub that
had been fixed by the time the hardware was released. There is no need for this
code.
Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
Dave Jiang [Mon, 3 Dec 2012 23:08:37 +0000 (16:08 -0700)]
ioat: Add alignment workaround for IVB platforms
The PCI IDs for the IvyBridge IOAT DMA need to go into a header file since
dma_v3.c looks them up for certain hardware workarounds. They also need to be
added to the alignment workaround for IOAT 3.2, since the issue wasn't fixed in IVB.
Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
async_tx: fix checking of dma_wait_for_async_tx() return value
dma_wait_for_async_tx() can also return DMA_PAUSED (which
should be treated as an error).
Cc: Vinod Koul <vinod.koul@intel.com> Cc: Dan Williams <djbw@fb.com> Cc: Tomasz Figa <t.figa@samsung.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <djbw@fb.com>
dmaengine: add cpu_relax() to busy-loop in dma_sync_wait()
Removal of the busy-loop from dma_sync_wait() is not a trivial
task so just add cpu_relax() to the loop for now.
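A simplified sketch of the polling loop (the real dma_sync_wait() also issues
pending work first and handles a timeout):

    do {
        status = dma_async_is_tx_complete(chan, cookie, NULL, NULL);
        if (status != DMA_IN_PROGRESS)
            break;
        cpu_relax();    /* give the sibling hardware thread a chance */
    } while (1);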
Cc: Vinod Koul <vinod.koul@intel.com> Cc: Tomasz Figa <t.figa@samsung.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <djbw@fb.com>
ioat3: add missing DMA unmap to ioat_xor_val_self_test()
Make ioat_xor_val_self_test() do DMA unmapping itself and fix handling
of failure cases.
Cc: Dan Williams <djbw@fb.com> Cc: Tomasz Figa <t.figa@samsung.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <djbw@fb.com>
Shiraz Hashim [Fri, 9 Nov 2012 15:26:29 +0000 (15:26 +0000)]
dmaengine/dmatest: terminate transfers only in case of errors
dmatest erroneously terminated transfers even in normal cases, leading to
test failures when multiple threads run over a channel. Fix this by
terminating transfers only in case of errors.
dma: sh: Don't use ENODEV for failing slave lookup
If a dmaengine driver's .device_alloc_chan_resources() method returns -ENODEV,
dma_request_channel() decides that the driver has been removed and removes the
device from its list. To prevent this, return -ENXIO if a slave lookup fails.
Jon Mason [Sun, 11 Nov 2012 23:03:20 +0000 (23:03 +0000)]
dmatest: Fix NULL pointer dereference on ioat
device_control is optional and not implemented in all DMA drivers.
Any call to it on such drivers will result in a NULL pointer dereference.
dmatest makes two of these calls when completing the kernel thread and removing
the module. These are corrected by calling the dmaengine_device_control
wrapper and checking for a non-existent device_control function pointer
there.
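A sketch modelled on the dmaengine_device_control() wrapper of that kernel
generation, shown here in simplified form: bail out if the callback is not
implemented.

    static inline int demo_device_control(struct dma_chan *chan,
                                          enum dma_ctrl_cmd cmd,
                                          unsigned long arg)
    {
        if (!chan->device->device_control)
            return -ENOSYS;

        return chan->device->device_control(chan, cmd, arg);
    }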
Signed-off-by: Jon Mason <jon.mason@intel.com> CC: Vinod Koul <vinod.koul@intel.com> CC: Dan Williams <djbw@fb.com> Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Barry Song [Tue, 6 Nov 2012 13:32:39 +0000 (21:32 +0800)]
DMAEngine: add dmaengine_prep_interleaved_dma wrapper for interleaved api
commit b14dab792dee(DMAEngine: Define interleaved transfer request api) adds
interleaved request api, this patch adds the dmaengine_prep_interleaved_dma
just like we have dmaengine_prep_ for other modes to avoid drivers call:
xxx_chan->device->device_prep_interleaved_dma().
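A sketch of such a wrapper, in the same style as the other dmaengine_prep_*
helpers (the name demo_* is illustrative):

    static inline struct dma_async_tx_descriptor *demo_prep_interleaved_dma(
            struct dma_chan *chan, struct dma_interleaved_template *xt,
            unsigned long flags)
    {
        return chan->device->device_prep_interleaved_dma(chan, xt, flags);
    }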
Signed-off-by: Barry Song <Baohua.Song@csr.com> Cc: Jassi Brar <jaswinder.singh@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Barry Song [Thu, 1 Nov 2012 14:54:43 +0000 (22:54 +0800)]
dmaengine: sirf: enable the driver support new SiRFmarco SoC
The driver supports the older SiRFprimaII SoCs; this patch makes it support
the new SiRFmarco as well.
SiRFmarco, as an SMP SoC, adds new DMA_INT_EN_CLR and DMA_CH_LOOP_CTRL_CLR
registers; to disable an IRQ/channel, we should write 1 to the corresponding
bit in these two CLEAR registers.
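A sketch of the split (register offsets and the is_marco flag are
illustrative): SiRFmarco disables via the dedicated write-1-to-clear register,
while the older path keeps the read-modify-write.

    if (sdma->is_marco)
        writel_relaxed(1 << ch, sdma->base + DEMO_INT_EN_CLR);
    else
        writel_relaxed(readl_relaxed(sdma->base + DEMO_INT_EN) & ~(1 << ch),
                       sdma->base + DEMO_INT_EN);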
Tested on SiRFmarco using SPI driver:
$ /mnt/spidev-sirftest -D /dev/spidev32766.0
spi mode: 0
bits per word: 8
max speed: 500000 Hz (500 KHz)
Jon Hunter [Thu, 11 Oct 2012 19:43:01 +0000 (14:43 -0500)]
of: dma: fix protection of DMA controller data stored by DMA helpers
In the current implementation of the OF DMA helpers, read-copy-update (RCU)
linked lists are being used for storing and accessing the DMA controller data.
This part of implementation is based upon V2 of the DMA helpers by Nicolas [1].
During a recent review of RCU, it became apparent that the code is missing the
required rcu_read_lock()/unlock() calls as well as synchronisation calls before
freeing any memory protected by RCU.
Having looked into adding the appropriate RCU calls to protect the DMA data it
became apparent that with the current DMA helper implementation, using RCU is
not as attractive as it may have been before. The main reasons being that ...
1. We need to protect the DMA data around calls to the xlate function.
2. The of_dma_simple_xlate() function calls the DMA engine function
dma_request_channel() which employs a mutex and so could sleep.
3. The RCU read-side critical sections must not sleep and so we cannot hold
an RCU read lock around the xlate function.
Therefore, instead of using RCU, an alternative for this use-case is to employ
a simple spinlock in conjunction with a usage-count variable to keep track of
how many current users of the DMA data structure there are. With this
implementation, the DMA data cannot be freed until all current users of the
DMA data are finished.
This patch is based upon the DMA helpers fix for potential deadlock [2].
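A sketch of the scheme (demo_find_controller() and the structure are
illustrative): look up and pin the controller data under a spinlock; it may
only be freed once its use count drops back to zero.

    static DEFINE_SPINLOCK(demo_of_dma_lock);

    static struct demo_of_dma *demo_get_controller(struct device_node *np)
    {
        struct demo_of_dma *ofdma;

        spin_lock(&demo_of_dma_lock);
        ofdma = demo_find_controller(np);
        if (ofdma)
            ofdma->use_count++;   /* pinned: xlate may sleep, so no RCU */
        spin_unlock(&demo_of_dma_lock);

        return ofdma;
    }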
Jon Hunter [Tue, 25 Sep 2012 18:59:31 +0000 (13:59 -0500)]
of: dma: fix potential deadlock when requesting a slave channel
In the latest version of the OF dma handlers I added support (rather hastily)
to exhaustively search for an available dma slave channel, for the use-case
where we have alternative slave channels that can be used. In the current
implementation a deadlock scenario can occur causing the CPU to loop forever.
The scenario is as follows ...
1. There are alternative channels available
2. The first channel that is found by calling of_dma_find_channel() is not
available and so the call to the xlate function returns NULL. In this case
we will call of_dma_find_channel() again but we will return the same channel
that we found the first time and hence, again the xlate will return NULL and
we will loop here forever.
Fix this potential deadlock by just using a single for-loop and not a for-loop
nested in a do-while loop. This change also replaces the function
of_dma_find_channel() with of_dma_match_channel() which performs a simple check
to see if a DMA channel matches the name specified.
I have tested this implementation on an OMAP4 panda board by adding a dummy
DMA specifier, that will cause the xlate function to return NULL, to the
beginning of a list of DMA specifiers for a DMA client.
Cc: Nicolas Ferre <nicolas.ferre@atmel.com> Cc: Benoit Cousson <b-cousson@ti.com> Cc: Stephen Warren <swarren@nvidia.com> Cc: Grant Likely <grant.likely@secretlab.ca> Cc: Russell King <linux@arm.linux.org.uk> Cc: Rob Herring <rob.herring@calxeda.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Dan Williams <djbw@fb.com> Signed-off-by: Jon Hunter <jon-hunter@ti.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
carma-fpga: pass correct flags to ->device_prep_dma_memcpy()
DMA unmapping is handled by the driver, so tell the fsldma.c driver
(which is the DMA engine driver used by carma-fpga) to skip
unmapping the destination and source buffers.
Cc: Ira W. Snyder <iws@ovro.caltech.edu> Cc: Tomasz Figa <t.figa@samsung.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <djbw@fb.com>
mtd: fsmc_nand: add missing DMA unmap to dma_xfer()
Make dma_xfer() do DMA unmapping itself and fix handling
of failure cases.
Cc: David Woodhouse <dwmw2@infradead.org> Cc: Vipin Kumar <vipin.kumar@st.com> Cc: Tomasz Figa <t.figa@samsung.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <djbw@fb.com>
ioat: add missing DMA unmap to ioat_dma_self_test()
Make ioat_dma_self_test() do DMA unmapping itself and fix handling
of failure cases.
Cc: Dan Williams <djbw@fb.com> Cc: Tomasz Figa <t.figa@samsung.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <djbw@fb.com>
Akinobu Mita [Sat, 27 Oct 2012 15:49:32 +0000 (00:49 +0900)]
dmatest: adjust invalid module parameters for number of source buffers
The DMA Engine test module has module parameters to set the number of source
buffers for xor and pq operations. These values can be set larger than the
maximum number of sources that the device supports. They are not adjusted, and
the unsupported number of source buffers is passed to the device. Most drivers
don't check it, so unexpected results can happen.
Make an appropriate adjustment to these module parameters before use.
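A sketch of the clamping (xor_sources/pq_sources are the module parameters;
the limits come from the dma_device capabilities):

    if (xor_sources > dma_dev->max_xor)
        xor_sources = dma_dev->max_xor;

    if (pq_sources > dma_maxpq(dma_dev, 0))
        pq_sources = dma_maxpq(dma_dev, 0);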
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Akinobu Mita [Sat, 27 Oct 2012 15:49:31 +0000 (00:49 +0900)]
dma: amba-pl08x: use vchan_dma_desc_free_list
vchan_dma_desc_free_list() iterates through each virt_dma_desc in the
specified list_head and calls vchan->desc_free().
We can use it instead of repeatedly calling pl08x_desc_free() for each
virt_dma_desc in the list_head, because the vchan->desc_free callback is set
to pl08x_desc_free() in the amba-pl08x driver.
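A sketch of the usage: gather everything queued on the virtual channel and let
the helper free each descriptor through vc.desc_free (pl08x_desc_free in this
driver).

    LIST_HEAD(head);

    vchan_get_all_descriptors(&plchan->vc, &head);
    vchan_dma_desc_free_list(&plchan->vc, &head);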
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Andy Shevchenko [Thu, 18 Oct 2012 14:34:11 +0000 (17:34 +0300)]
dw_dmac: change dev_crit to dev_WARN in dwc_handle_error
When handling a bad descriptor, dwc_handle_error() will now dump a stack trace
as well. It's a lot more verbose and more likely to get the user's attention.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Heikki Krogerus [Thu, 18 Oct 2012 14:34:08 +0000 (17:34 +0300)]
dmaengine: dw_dmac: amend description and indentation
The driver will be used as a core part for various implementations of the
DesignWare DMA device. The patch adjusts the description at the top and corrects
paragraph indentation in a few places across the code.
Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Viresh Kumar [Tue, 16 Oct 2012 04:19:17 +0000 (09:49 +0530)]
dmaengine: dw_dmac: Enhance device tree support
The dw_dmac driver already supports device tree, but it used to have its
platform data passed the non-DT way.
This patch makes the following changes:
- pass platform data via DT, non-DT way still takes precedence if both are used.
- create generic filter routine
- Earlier, slave information was made available by slave-specific filter routines
in the chan->private field. Now this information is passed from within the dmac
DT node. Slave drivers are now required to pass a bus_id (a string) as a
parameter to this generic filter(), which is compared against the slave
data passed from DT by the generic filter routine.
- Update binding document
Kees Cook [Tue, 23 Oct 2012 20:01:54 +0000 (13:01 -0700)]
drivers/dma: remove CONFIG_EXPERIMENTAL
This config item has not carried much meaning for a while now and is
almost always enabled by default. As agreed during the Linux kernel
summit, remove it.
CC: Vinod Koul <vinod.koul@intel.com> CC: Dan Williams <djbw@fb.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Matt Porter [Wed, 19 Sep 2012 14:49:48 +0000 (10:49 -0400)]
of: dma: fix typos in generic dma binding definition
Some semicolons were left out in the examples.
The #dma-channels and #dma-requests properties have a prefix
that is, by convention, reserved for cell size properties.
Rename those properties to dma-channels and dma-requests.
Signed-off-by: Matt Porter <mporter@ti.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Jon Hunter <jon-hunter@ti.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Jon Hunter [Fri, 14 Sep 2012 22:41:56 +0000 (17:41 -0500)]
of: Add generic device tree DMA helpers
This is based upon the work by Benoit Cousson [1] and Nicolas Ferre [2]
to add some basic helpers to retrieve a DMA controller device_node and the
DMA request/channel information.
Aim of DMA helpers
- The purpose of device-tree is to describe the capabilities of the hardware.
Thinking about DMA controllers purely from the context of the hardware to
begin with, we can describe a device in terms of a DMA controller as
follows ...
1. Number of DMA controllers
2. Number of channels (maybe physical or logical)
3. Mapping of DMA requests signals to DMA controller
4. Number of DMA interrupts
5. Mapping of DMA interrupts to channels
- With the above in mind the aim of the DT DMA helper functions is to extract
the above information from the DT and provide it to the appropriate driver.
However, due to the vast number of DMA controllers, not all of which use a
common framework (such as DMA Engine), this has turned out not to be a
trivial task. In previous discussions on this topic the following concerns
have been raised ...
1. How does the binding support devices with multiple DMA controllers?
2. How to support both legacy DMA controllers not using DMA Engine as
well as those that support DMA Engine.
3. When using with DMA Engine how do we support the various
implementations where the opaque filter function parameter differs
between implementations?
4. How do we handle DMA channels that are identified with a string
versus an integer?
- Hence the design of the DMA helpers has to accommodate the above or align on
an agreement about what can or should be supported.
Design of DMA helpers
1. Registering DMA controllers
In the case of DMA controllers that are using DMA Engine, requesting a
channel is performed by calling the following function.
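The call in question is the standard dmaengine request, roughly:

    dma_cap_mask_t mask;
    struct dma_chan *chan;

    dma_cap_zero(mask);
    dma_cap_set(DMA_SLAVE, mask);
    chan = dma_request_channel(mask, filter_fn, filter_param);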
The mask variable is used to match a type of the device controller in a list
of controllers. The filter_fn and filter_param are used to identify the
required dma channel and return a handle to the dma channel of type dma_chan.
From the examples I have seen, the mask and filter_fn are constant
for a given DMA controller and therefore, we can specify these as controller
specific data when registering the DMA controller with the device-tree DMA
helpers.
The filter_param variable is of an unknown type and is typically specific
to the DMA engine implementation for a given DMA controller. To allow some
flexibility in the type and formatting of this filter_param we employ an
xlate function to translate the device-tree binding information into the appropriate
format. The xlate function used for a DMA controller can also be specified
when registering the DMA controller with the device-tree DMA helpers.
Based upon the above, a function for registering the DMA controller with the
DMA helpers now looks like the below. The data variable is used to pass a
pointer to DMA controller specific data used by the xlate function.
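As merged in this series, the registration entry point looks roughly like:

    int of_dma_controller_register(struct device_node *np,
            struct dma_chan *(*of_dma_xlate)
            (struct of_phandle_args *, struct of_dma *),
            void *data);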
For example, in the case where DMA engine is used, we define the following
structure (that stores the DMA engine capability mask and filter function)
and pass this to the data variable in the above function.
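That structure is roughly:

    struct of_dma_filter_info {
        dma_cap_mask_t  dma_cap;
        dma_filter_fn   filter_fn;
    };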
2. Representing and requesting channel information
Please see the dma binding documentation included in this patch for a
description of how DMA controllers and client information should be
represented with device-tree. For more information on how this binding
came about please see [3]. In addition to this, feedback received from
the Linux kernel summit showed a consensus (among those who attended) to
use a name to identify DMA client information [4].
A DMA channel can be requested by calling the following function, where name
is a required parameter used for identifying a DMA channel. This function
has been designed to return a structure of type dma_chan to work with the
DMA engine driver. Note that if DMA engine is used then drivers should be
using the DMA engine API dma_request_slave_channel() (implemented in part 2
of this series, "dmaengine: add helper function to request a slave DMA
channel") which will in turn call the below function if device-tree is
present. The aim being to have a common DMA engine interface regardless of
whether device tree is being used.
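The request entry point referenced above, as merged in this series, is roughly:

    struct dma_chan *of_dma_request_slave_channel(struct device_node *np,
                                                  const char *name);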
These devices present a problem, as there may not be a uniform way to easily
support them with regard to device tree. Ideally, these should be migrated
to DMA engine. However, if this is not possible, then they should still be
able to use this binding; the only constraint imposed by this implementation
is that when requesting a DMA channel via of_dma_request_slave_channel(), it
will return a type of dma_chan.
This implementation has been tested on OMAP4430 using the kernel v3.6-rc5. I
have validated that MMC is working on the PANDA board with this implementation.
My development branch for testing on OMAP can be found here [5].
v6: - minor corrections in DMA binding documentation
v5: - minor update to binding documentation
- added loop to exhaustively search for a slave channel in the case where
there could be alternative channels available
v4: - revert the removal of xlate function from v3
- update the proposed binding format and APIs based upon discussions [3]
v3: - avoid passing an xlate function and instead pass DMA engine parameters
- define number of dma channels and requests in dma-controller node
v2: - remove of_dma_to_resource API
- make property #dma-cells required (no fallback anymore)
- another check in of_dma_xlate_onenumbercell() function
Cc: Nicolas Ferre <nicolas.ferre@atmel.com> Cc: Benoit Cousson <b-cousson@ti.com> Cc: Stephen Warren <swarren@nvidia.com> Cc: Grant Likely <grant.likely@secretlab.ca> Cc: Russell King <linux@arm.linux.org.uk> Cc: Rob Herring <rob.herring@calxeda.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Dan Williams <djbw@fb.com> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: Jon Hunter <jon-hunter@ti.com> Reviewed-by: Stephen Warren <swarren@wwwdotorg.org> Acked-by: Rob Herring <rob.herring@calxeda.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Jon Hunter [Fri, 14 Sep 2012 22:41:57 +0000 (17:41 -0500)]
dmaengine: add helper function to request a slave DMA channel
Currently slave DMA channels are requested by calling dma_request_channel(),
which requires DMA clients to pass various filter parameters to obtain the
appropriate channel.
With device-tree being used by architectures such as arm and the addition of
device-tree helper functions to extract the relevant DMA client information
from device-tree, add a new function to request a slave DMA channel using
device-tree. This function is currently a simple wrapper that calls the
device-tree of_dma_request_slave_channel() function.
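A sketch of the wrapper (the name demo_* is illustrative; the real function
also covers the !CONFIG_OF build):

    struct dma_chan *demo_request_slave_channel(struct device *dev,
                                                const char *name)
    {
        if (dev->of_node)
            return of_dma_request_slave_channel(dev->of_node, name);

        return NULL;
    }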
Cc: Nicolas Ferre <nicolas.ferre@atmel.com> Cc: Benoit Cousson <b-cousson@ti.com> Cc: Stephen Warren <swarren@nvidia.com> Cc: Grant Likely <grant.likely@secretlab.ca> Cc: Russell King <linux@arm.linux.org.uk> Cc: Rob Herring <rob.herring@calxeda.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Dan Williams <djbw@fb.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Jon Hunter <jon-hunter@ti.com> Reviewed-by: Stephen Warren <swarren@wwwdotorg.org> Acked-by: Rob Herring <rob.herring@calxeda.com> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
leds: leds-gpio: set devm_gpio_request_one() flags param correctly
commit a99d76f ("leds: leds-gpio: use gpio_request_one")
changed the leds-gpio driver to use gpio_request_one() instead
of gpio_request() + gpio_direction_output().
Unfortunately, it also made a semantic change that breaks the
leds-gpio driver.
The gpio_request_one() flags parameter was set to:
GPIOF_DIR_OUT | (led_dat->active_low ^ state)
Since GPIOF_DIR_OUT is 0, the final flags value will just be the
XOR'ed value of led_dat->active_low and state.
This value was used to distinguish between HIGH/LOW initial output
level and to call gpio_direction_output() accordingly.
With this new semantic, gpio_request_one() will treat a flags value
of 1 as an input-direction configuration (GPIOF_DIR_IN) and will
call gpio_direction_input() instead of gpio_direction_output().
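A sketch of the corrected request (variable names follow the leds-gpio driver
but are illustrative here): encode the initial level in the request flags
rather than XOR-ing it into the direction bit.

    unsigned long flags = (led_dat->active_low ^ state) ?
                          GPIOF_OUT_INIT_HIGH : GPIOF_OUT_INIT_LOW;

    ret = devm_gpio_request_one(parent, template->gpio, flags, template->name);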