Russell King [Sun, 18 Oct 2015 17:31:26 +0000 (18:31 +0100)]
crypto: marvell/cesa - use __le32 for hardware descriptors
Much of the driver uses cpu_to_le32() to convert values for descriptors
to little endian before writing them. Use __le32 to define the hardware-
accessed parts of the descriptors, and ensure that, in most places where
it is reasonable to do so, assignments to these fields go through
cpu_to_le32().
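For illustration, a minimal sketch of the pattern (the struct and field
names here are made up, not the actual CESA definitions):

#include <linux/kernel.h>
#include <linux/types.h>
#include <asm/byteorder.h>

/* Illustrative only: hardware-visible descriptor fields are declared as
 * __le32 so sparse can flag any assignment that skips the conversion.
 */
struct example_hw_desc {
        __le32 src;        /* DMA address of the source buffer */
        __le32 dst;        /* DMA address of the destination buffer */
        __le32 byte_cnt;   /* number of bytes to process */
};

static void example_fill_desc(struct example_hw_desc *desc,
                              dma_addr_t src, dma_addr_t dst, u32 len)
{
        /* Convert CPU-native values to little endian on assignment. */
        desc->src = cpu_to_le32(lower_32_bits(src));
        desc->dst = cpu_to_le32(lower_32_bits(dst));
        desc->byte_cnt = cpu_to_le32(len);
}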
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 17:31:20 +0000 (18:31 +0100)]
crypto: marvell/cesa - fix missing cpu_to_le32() in mv_cesa_dma_add_op()
When tdma->src is freed in mv_cesa_dma_cleanup(), we convert the DMA
address from a little-endian value prior to calling dma_pool_free().
However, mv_cesa_dma_add_op() assigns tdma->src without first converting
the DMA address to little endian. Fix this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:51:31 +0000 (17:51 +0100)]
crypto: caam - fix indentation of close braces
The kernel's coding style suggests that closing braces for initialisers
should not be aligned to the open brace column. The CodingStyle doc
shows how this should be done. Remove the additional tab.
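For example (illustrative data only):

#include <linux/types.h>

/* Not preferred - the closing brace picks up an extra tab:
 *        static const u32 example_table[] = {
 *                0x00000001, 0x00000002,
 *                };
 * Preferred - the closing brace returns to the start of the construct:
 */
static const u32 example_table[] = {
        0x00000001, 0x00000002,
};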
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:51:25 +0000 (17:51 +0100)]
crypto: caam - only export the state we really need to export
Avoid exporting lots of state by only exporting what we really require,
which is the buffer containing the set of pending bytes to be hashed,
number of pending bytes, the context buffer, and the function pointer
state. This reduces the exported state size from 576 bytes to 216 bytes.
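A minimal sketch of the kind of reduced export structure this describes
(the names below are illustrative, not the actual caam definitions):

#include <crypto/hash.h>
#include <linux/types.h>

#define EXAMPLE_MAX_BLOCK_SIZE  128   /* illustrative bound */
#define EXAMPLE_MAX_CTX_LEN      64   /* illustrative bound */

/* Only state that cannot be recreated is exported: the pending
 * (unhashed) bytes and their count, the running context, and which
 * update/final handlers were in use.
 */
struct example_hash_export_state {
        u8 buf[EXAMPLE_MAX_BLOCK_SIZE];
        u8 ctx[EXAMPLE_MAX_CTX_LEN];
        int buflen;
        int (*update)(struct ahash_request *req);
        int (*final)(struct ahash_request *req);
};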
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
caam does not properly calculate the size of the retained state
when non-block aligned hashes are requested - it uses the wrong
buffer sizes, which results in errors such as:
caam_jr 2102000.jr1: 40000501: DECO: desc idx 5: SGT Length Error. The descriptor is trying to read more data than is contained in the SGT table.
0x68 comes from 0x28 (the context size) plus the "in_len" rounded down
to a block size (0x40). in_len comes from 0x26 bytes of unhashed data
from the previous operation, plus the 0x20 bytes from the latest
operation.
The fix replaces the 0x06 length with the correct 0x26 bytes of previously
unhashed data.
This fixes a previous commit which erroneously "fixed" this due to a
DMA-API bug report; that commit indicates that the bug was caused via a
test_ahash_pnum() function in the tcrypt module. No such function has
ever existed in the mainline kernel. Given that the change in this
commit has been tested with DMA API debug enabled and shows no issue,
I can only conclude that test_ahash_pnum() was triggering that bad
behaviour by CAAM.
Fixes: 7d5196aba3c8 ("crypto: caam - Correct DMA unmap size in ahash_update_ctx()") Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:51:15 +0000 (17:51 +0100)]
crypto: caam - avoid needlessly saving and restoring caam_hash_ctx
When exporting and importing the hash state, we will only export and
import into hashes which share the same struct crypto_ahash pointer.
(See hash_accept->af_alg_accept->hash_accept_parent.)
This means that saving the caam_hash_ctx structure on export, and
restoring it on import is a waste of resources. So, remove this code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Boris Brezillon [Sun, 18 Oct 2015 16:24:57 +0000 (17:24 +0100)]
crypto: marvell/cesa - fix memory leak
The local chain variable is not cleaned up if an error occurs in the middle
of DMA chain creation. Fix that by dropping the local chain variable and
using the dreq->chain field which will be cleaned up by
mv_cesa_dma_cleanup() in case of errors.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Reported-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:24:52 +0000 (17:24 +0100)]
crypto: marvell/cesa - fix first-fragment handling in mv_cesa_ahash_dma_last_req()
The software padding must be added using the first/mid fragment mode,
and any subsequent operation needs to be a mid-fragment.
Fix this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:24:47 +0000 (17:24 +0100)]
crypto: marvell/cesa - rearrange handling for sw padded hashes
Rearrange the last request handling for hashes which require software
padding.
We prepare the padding to be appended, and then append as much of the
padding as will fit to any existing data that's already queued up, adding
an operation block and launching the operation.
Any remainder is then appended as a separate operation.
This ensures that the hardware only ever sees multiples of the hash
block size to be operated on for software padded hashes, thus ensuring
that the engine always indicates that it has finished the calculation.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:24:42 +0000 (17:24 +0100)]
crypto: marvell/cesa - rearrange handling for hw finished hashes
Rearrange the last request handling for hardware finished hashes
by moving the generation of the fragment operation into this path.
This results in a simplified sequence to handle this case, and
allows us to move the software padded case further down into the
function. Add comments describing these parts.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:24:26 +0000 (17:24 +0100)]
crypto: marvell/cesa - ensure iter.base.op_len is the full op length
When we process the last request of data, and the request contains user
data, the loop in mv_cesa_ahash_dma_req_init() marks the first data size
as being iter.base.op_len which does not include the size of the cache
data. This means we end up hashing an insufficient amount of data.
Fix this by always including the cache size in the first operation
length of any request.
This has the effect that, for a request containing no user data, the
first operation length is just the size of the cached data.
Russell King [Sun, 18 Oct 2015 16:24:21 +0000 (17:24 +0100)]
crypto: marvell/cesa - use presence of scatterlist to determine data load
Use the presence of the scatterlist to determine whether we should load
any new user data to the engine. The following shall always be true at
this point:
iter.base.op_len == 0 === iter.src.sg
In doing so, we can:
1. eliminate the test for iter.base.op_len inside the loop, which
makes the loop operation more obvious and understandable.
2. move the operation generation for the cache-only case.
This prepares the code for the next step in its transformation, and also
uncovers a bug that will be fixed in the next patch.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the calls to mv_cesa_dma_add_frag() into the parent function,
mv_cesa_ahash_dma_req_init(). This is in preparation for changing
when we generate the operation blocks, as we need to avoid generating
a block for a partial hash block at the end of the user data.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:24:11 +0000 (17:24 +0100)]
crypto: marvell/cesa - always ensure mid-fragments after first-fragment
If we add a template first-fragment operation, always update the
template to be a mid-fragment. This ensures that mid-fragments
always follow on from a first fragment in every case.
This means we can move the first to mid-fragment update code out of
mv_cesa_ahash_dma_add_data().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:24:06 +0000 (17:24 +0100)]
crypto: marvell/cesa - factor out adding an operation and launching it
Add a helper to add the fragment operation block followed by the DMA
entry to launch the operation.
Although at the moment this pattern only strictly appears at one site,
two other sites can be factored as well by slightly changing the order
in which the DMA operations are performed. This should be harmless as
the only thing which matters is to have all the data loaded into SRAM
prior to launching the operation.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:24:01 +0000 (17:24 +0100)]
crypto: marvell/cesa - factor out first fragment decisions to helper
Multiple locations in the driver test the operation context fragment
type, checking whether it is a first fragment or not. Introduce a
mv_cesa_mac_op_is_first_frag() helper, which returns true if the
fragment operation is for a first fragment.
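A self-contained sketch of such a predicate, with made-up names standing
in for the driver's fragment-type encoding:

#include <linux/types.h>

#define EXAMPLE_FRAG_MSK        (3U << 30)   /* hypothetical mask */
#define EXAMPLE_FIRST_FRAG      (1U << 30)   /* hypothetical value */

struct example_op_ctx {
        u32 config;   /* descriptor configuration word */
};

static inline bool example_mac_op_is_first_frag(const struct example_op_ctx *op)
{
        /* "First fragment" means no intermediate digest has been fed
         * back into the engine yet.
         */
        return (op->config & EXAMPLE_FRAG_MSK) == EXAMPLE_FIRST_FRAG;
}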
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:23:51 +0000 (17:23 +0100)]
crypto: marvell/cesa - ensure template operation is initialised
Ensure that the template operation is fully initialised, otherwise we
end up loading data from the kernel stack into the engines, which can
upset the hash results.
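The general shape of the fix, sketched with an illustrative context type
rather than the driver's own:

#include <linux/string.h>
#include <linux/types.h>

struct example_op_ctx {
        u32 config;
        u32 digest[8];
};

static void example_init_tmpl(struct example_op_ctx *tmpl, u32 cfg)
{
        /* Zero the whole template first, so no stack garbage can be
         * copied into the engine later on.
         */
        memset(tmpl, 0, sizeof(*tmpl));
        tmpl->config = cfg;
}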
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:23:46 +0000 (17:23 +0100)]
crypto: marvell/cesa - fix the bit length endianness
The endianness of the bit length used in the final stage depends on the
endianness of the algorithm - md5 hashes need it to be in little endian
format, whereas SHA hashes need it in big endian format. Use the
previously added algorithm endianness flag to control this.
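A hedged sketch of the idea (the flag is passed in explicitly here; the
driver keeps it in its request state):

#include <linux/string.h>
#include <linux/types.h>
#include <asm/byteorder.h>

static void example_write_bitlen(u8 *pad, u64 bitlen, bool le)
{
        if (le) {
                /* MD5-style padding: length stored little endian */
                __le64 v = cpu_to_le64(bitlen);

                memcpy(pad, &v, sizeof(v));
        } else {
                /* SHA-style padding: length stored big endian */
                __be64 v = cpu_to_be64(bitlen);

                memcpy(pad, &v, sizeof(v));
        }
}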
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:23:40 +0000 (17:23 +0100)]
crypto: marvell/cesa - add flag to determine algorithm endianness
Rather than determining whether we're using an MD5 hash by looking at
the digest size, switch to a cleaner solution using a per-request flag
initialised by the method type.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Sun, 18 Oct 2015 16:23:35 +0000 (17:23 +0100)]
crypto: marvell/cesa - keep creq->state in CPU endian format at all times
Currently, we read/write the state in CPU endian, but on the final
request, we convert its endianness according to the requested algorithm
(MD5 is little endian, the SHA algorithms are big endian).
Always keep creq->state in CPU native endian format, and perform the
necessary conversion when copying the hash to the result.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently this driver calls pm_runtime_get_sync() rampantly
but never puts anything back. This makes it impossible for the
device to autosuspend properly; it will remain fully active
after the first use.
Fix in the obvious way.
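The usual pairing, as a minimal sketch (all driver-specific work omitted):

#include <linux/device.h>
#include <linux/pm_runtime.h>

static int example_do_work(struct device *dev)
{
        int ret;

        ret = pm_runtime_get_sync(dev);  /* resume and take a usage count */
        if (ret < 0) {
                pm_runtime_put_noidle(dev);
                return ret;
        }

        /* use the hardware here */

        pm_runtime_put(dev);  /* drop the count so autosuspend can work */
        return 0;
}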
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Cc: Kukjin Kim <kgene@kernel.org> Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com> Tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch implements the AES key wrapping as specified in
NIST SP800-38F and RFC3394.
The implementation covers key wrapping without padding.
IV handling: The caller does not provide an IV for encryption,
but must obtain the IV after encryption; it serves as the first
semiblock in the ciphertext structure defined by SP800-38F. Conversely,
for decryption, the caller must provide the first semiblock of the data
as the IV and the following blocks as the ciphertext.
Key unwrapping (decryption) is an authenticated operation: the caller
will receive -EBADMSG during decryption if the authentication failed.
Although the standards define the key wrapping for AES only, the template
can be used with any other block cipher that has a block size of 16
bytes. During initialization of the template, that condition is checked.
Any cipher not having a block size of 16 bytes will cause the
initialization to fail.
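A hedged sketch of how a kernel caller might exercise the template
through the current skcipher API, assuming it is instantiated as
"kw(aes)"; error paths are minimal and the buffer layout follows the IV
convention described above:

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Decrypt (unwrap) a received blob: the first 8-byte semiblock is the
 * IV, the remaining semiblocks are the ciphertext. Returns -EBADMSG if
 * the integrity check fails.
 */
static int example_kw_unwrap(const u8 *key, unsigned int keylen,
                             u8 *blob, unsigned int bloblen)
{
        struct crypto_skcipher *tfm;
        struct skcipher_request *req;
        struct scatterlist sg;
        DECLARE_CRYPTO_WAIT(wait);
        int ret;

        tfm = crypto_alloc_skcipher("kw(aes)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        ret = crypto_skcipher_setkey(tfm, key, keylen);
        if (ret)
                goto out_tfm;

        req = skcipher_request_alloc(tfm, GFP_KERNEL);
        if (!req) {
                ret = -ENOMEM;
                goto out_tfm;
        }

        sg_init_one(&sg, blob + 8, bloblen - 8);
        skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
                                      crypto_req_done, &wait);
        skcipher_request_set_crypt(req, &sg, &sg, bloblen - 8, blob);
        ret = crypto_wait_req(crypto_skcipher_decrypt(req), &wait);

        skcipher_request_free(req);
out_tfm:
        crypto_free_skcipher(tfm);
        return ret;
}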
Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Boris BREZILLON [Tue, 16 Jun 2015 09:46:46 +0000 (11:46 +0200)]
crypto: testmgr - test IV value after a cipher operation
The crypto drivers are supposed to update the IV passed to the crypto
request before calling the completion callback.
Test for the IV value before considering the test as successful.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Daniel Thompson [Wed, 14 Oct 2015 16:04:55 +0000 (17:04 +0100)]
hwrng: stm32 - Fix build with CONFIG_PM
Commit c6a97c42e399 ("hwrng: stm32 - add support for STM32 HW RNG")
was inadequately tested (actually it was tested quite hard, so
incompetent would be a better description than inadequate) and does
not compile on platforms with CONFIG_PM set.
Fix this.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Sowmini Varadhan [Tue, 13 Oct 2015 14:54:01 +0000 (10:54 -0400)]
crypto: pkcs7 - Fix unaligned access in pkcs7_verify()
On sparc, we see unaligned access messages on each modprobe[-r]:
Kernel unaligned access at TPC[6ad9b4] pkcs7_verify [..]
Kernel unaligned access at TPC[6a5484] crypto_shash_finup [..]
Kernel unaligned access at TPC[6a5390] crypto_shash_update [..]
Kernel unaligned access at TPC[10150308] sha1_sparc64_update [..]
Kernel unaligned access at TPC[101501ac] __sha1_sparc64_update [..]
These were triggered by mod_verify_sig() invocations of pkcs7_verify(), and
are caused by an unaligned desc (with sha1, digest_size is 0x14) resulting from:
desc = digest + digest_size;
To fix this, pkcs7_verify() needs to make sure that desc points at a
suitably aligned address past the digest_size, and that the kzalloc()
is sized appropriately, taking alignment values into consideration.
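A sketch of the general pattern described (names illustrative): round the
digest size up to the descriptor's alignment before placing the shash
descriptor behind it, and size the allocation to match.

#include <crypto/hash.h>
#include <linux/kernel.h>
#include <linux/slab.h>

static struct shash_desc *example_desc_after_digest(struct crypto_shash *tfm,
                                                    u8 **digest_out)
{
        unsigned int digest_size = crypto_shash_digestsize(tfm);
        unsigned int aligned = ALIGN(digest_size, __alignof__(struct shash_desc));
        unsigned int desc_size = sizeof(struct shash_desc) +
                                 crypto_shash_descsize(tfm);
        void *buf;

        /* The digest comes first; the descriptor starts at the next
         * suitably aligned offset, even when digest_size (0x14 for
         * sha1) is not a multiple of the required alignment.
         */
        buf = kzalloc(aligned + desc_size, GFP_KERNEL);
        if (!buf)
                return NULL;

        *digest_out = buf;
        return buf + aligned;
}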
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
LABBE Corentin [Mon, 12 Oct 2015 17:47:04 +0000 (19:47 +0200)]
crypto: ux500 - Use devm_xxx() managed function
Use the devm_xxx() managed functions to strip down the error
handling and remove code.
At the same time, replace the request_mem_region()/ioremap() pair with
the unified devm_ioremap_resource() function.
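The combined call, as a minimal probe sketch (all driver specifics omitted):

#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
        struct resource *res;
        void __iomem *base;

        /* Replaces the request_mem_region()/ioremap() pair; the region
         * is released automatically when the device is unbound.
         */
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        base = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(base))
                return PTR_ERR(base);

        return 0;
}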
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Arnd Bergmann [Mon, 12 Oct 2015 13:52:34 +0000 (15:52 +0200)]
crypto: mxs-dcp - mxs-dcp is an stmp device
The mxs-dcp driver relies on the stmp_reset_block() helper function, which
is provided by CONFIG_STMP_DEVICE. This symbol is always set on MXS,
but the driver can now also be built for MXC (i.MX6), which results
in a build error if no other driver selects STMP_DEVICE:
drivers/built-in.o: In function `mxs_dcp_probe':
vf610-ocotp.c:(.text+0x3df302): undefined reference to `stmp_reset_block'
This adds the 'select', as all other stmp drivers have it.
Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: a2712e6c75f ("crypto: mxs-dcp - Allow MXS_DCP to be used on MX6SL") Acked-by: Marek Vasut <marex@denx.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Daniel Thompson [Mon, 12 Oct 2015 08:21:30 +0000 (09:21 +0100)]
ARM: dts: stm32f429: Adopt STM32 RNG driver
New bindings and a driver have been created for STM32 series parts. This
patch integrates those changes.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Acked-by: Maxime Coquelin <mcoquelin.stm32@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Daniel Thompson [Mon, 12 Oct 2015 08:21:29 +0000 (09:21 +0100)]
hwrng: stm32 - add support for STM32 HW RNG
Add support for STMicroelectronics STM32 random number generator.
The config value defaults to N, reflecting the fact that STM32 is a
very low resource microcontroller platform and unlikely to be targeted
by any "grown up" defconfigs.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Daniel Thompson [Mon, 12 Oct 2015 08:21:28 +0000 (09:21 +0100)]
dt-bindings: Document the STM32 HW RNG bindings
This adds documentation of device tree bindings for the STM32 hardware
random number generator.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Acked-by: Maxime Coquelin <mcoquelin.stm32@gmail.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Fri, 9 Oct 2015 20:14:22 +0000 (21:14 +0100)]
crypto: marvell/cesa - factor out common import/export functions
As all the import functions and export functions are virtually
identical, factor out their common parts into a generic
mv_cesa_ahash_import() and mv_cesa_ahash_export() respectively. These
perform the actual import or export, and we pass the data pointers and
length into these functions.
We have to switch a % const operation to do_div() in the common import
function to avoid provoking gcc to use the expensive 64-bit by 64-bit
modulus operation.
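The idiom in question, as a minimal sketch: do_div() divides a u64 in
place by a 32-bit divisor and returns the remainder, keeping the
operation 64-by-32 on 32-bit ARM.

#include <linux/types.h>
#include <asm/div64.h>

static unsigned int example_cache_offset(u64 byte_count, unsigned int blocksize)
{
        /* Equivalent to "byte_count % blocksize", but do_div() turns
         * byte_count into the quotient and returns the remainder,
         * avoiding a 64-by-64 modulus.
         */
        return do_div(byte_count, blocksize);
}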
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Fri, 9 Oct 2015 19:43:48 +0000 (20:43 +0100)]
crypto: marvell/cesa - fix wrong hash results
Attempting to use the sha1 digest for openssh via openssl reveals that
the result from the hash is wrong: this happens when we export the
state from one socket and import it into another by calling accept().
The reason for this is because the operation is reset to "initial block"
state, whereas we may be past the first fragment of data to be hashed.
Arrange for the operation code to avoid the initialisation of the state,
thereby preserving the imported state.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When an AF_ALG fd is accepted a second time (hence hash_accept() is
used), hash_accept_parent() allocates a new private context using
sock_kmalloc(). This context is uninitialised. After use of the new
fd, we eventually end up with the kernel complaining about a bogus
address such as c0627770, which is effectively a random value.
Poisoning the memory allocated by
the above sock_kmalloc() produces kernel oopses within the marvell hash
code, particularly the interrupt handling.
The following simplified call sequence occurs:
hash_accept()
  crypto_ahash_export()
    marvell hash export function
  af_alg_accept()
    hash_accept_parent() <== allocates uninitialised struct hash_ctx
  crypto_ahash_import()
    marvell hash import function
hash_ctx contains the struct mv_cesa_ahash_req in its req.__ctx member,
and, as the marvell hash import function only partially initialises
this structure, we end up with a lot of members which are left with
whatever data was in memory prior to sock_kmalloc().
Add zero-initialisation of this structure.
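A minimal sketch of the fix pattern with an illustrative context type
(not the driver's actual struct):

#include <linux/string.h>
#include <linux/types.h>

struct example_ahash_req {
        u64 len;
        unsigned int cache_ptr;
        u8 cache[64];
};

static void example_import_state(struct example_ahash_req *creq,
                                 const void *cache, unsigned int cache_len,
                                 u64 len)
{
        /* Clear the whole context first: it may sit in memory that was
         * never initialised (e.g. a fresh sock_kmalloc() buffer).
         */
        memset(creq, 0, sizeof(*creq));

        creq->len = len;
        creq->cache_ptr = cache_len;
        /* cache_len is assumed to fit within creq->cache here */
        memcpy(creq->cache, cache, cache_len);
}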
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Acked-by: Boris Brezillon <boris.brezillon@free-electronc.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Russell King [Fri, 9 Oct 2015 19:43:38 +0000 (20:43 +0100)]
crypto: marvell/cesa - fix stack smashing in marvell/hash.c
Several of the algorithms in marvell/hash.c have a statesize of zero.
When an AF_ALG accept() is performed on an already-accepted file descriptor,
hash_accept() exports the hash state into an on-stack buffer whose size comes
from .statesize (here zero bytes), and the driver's export function then
proceeds to write to that 'state' buffer as if it were a "struct md5_state",
"struct sha1_state" etc., smashing the stack. Add the necessary initialisers
for the .statesize member.
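The kind of initialiser the fix adds, shown as illustrative fragments
(the rest of each ahash_alg definition is omitted):

#include <crypto/hash.h>
#include <crypto/md5.h>
#include <crypto/sha.h>   /* <crypto/sha1.h> on newer kernels */

/* Each ahash_alg must report how many bytes its export()/import()
 * really use; with .statesize left at zero, AF_ALG's on-stack export
 * buffer has no room and the export overruns it.
 */
static struct ahash_alg example_md5_alg = {
        .halg = {
                .statesize = sizeof(struct md5_state),
        },
};

static struct ahash_alg example_sha1_alg = {
        .halg = {
                .statesize = sizeof(struct sha1_state),
        },
};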
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
MAINTAINERS: add maintainers for the Marvell Crypto driver
A new crypto driver for Marvell ARM platforms was added in
drivers/crypto/marvell/ as part of commit f63601fd616ab ("crypto:
marvell/cesa - add a new driver for Marvell's CESA"). This commit adds
the relevant developers to the list of maintainers.
Cc: Boris Brezillon <boris.brezillon@free-electrons.com> Cc: Arnaud Ebalard <arno@natisbad.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Russell King <linux@arm.linux.org.uk> Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Acked-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Haren Myneni [Thu, 8 Oct 2015 20:45:51 +0000 (13:45 -0700)]
crypto: 842 - Add CRC and validation support
This patch adds CRC generation and validation support for nx-842.
Add a CRC flag so that the nx842 coprocessor includes a CRC during
compression and validates it during decompression.
Also change the 842 software compression to append the CRC value at the
end of the template and check it during decompression.
Signed-off-by: Haren Myneni <haren@us.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: jitterentropy - remove unnecessary information from a comment
The clocksource framework has not provided the clocksource_register() function
since commit f893598 ("clocksource: Mostly kill clocksource_register()"), so
let's remove the now-unnecessary information about this function from the comment.
Signed-off-by: Alexander Kuleshov <kuleshovmail@gmail.com> Suggested-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tadeusz Struk [Thu, 8 Oct 2015 16:26:55 +0000 (09:26 -0700)]
crypto: akcipher - Changes to asymmetric key API
The setkey function has been split into set_priv_key and set_pub_key.
Akcipher requests now take scatterlists (sgl) for src and dst instead of void *.
Users of the API, i.e. the two existing RSA implementations and the
testmgr code, have been updated accordingly.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Lee Jones [Wed, 7 Oct 2015 12:23:29 +0000 (13:23 +0100)]
hwrng: st - Improve FIFO size/depth description
The original representation of the FIFO size in the driver, coupled with the
ambiguity in the documentation, meant that it was easy to confuse readers.
This led to a false-positive bug find and subsequent time wasted
debugging this phantom issue.
Hopefully this patch can prevent future readers from falling into the
same trap.
Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Lee Jones [Wed, 7 Oct 2015 12:23:28 +0000 (13:23 +0100)]
hwrng: st - Use real-world device timings for timeout
Samples are documented to be available every 0.667us, so in theory
the 8 sample deep FIFO should take 5.336us to fill. However, during
thorough testing, it became apparent that filling the FIFO actually
takes closer to 12us.
Also take into consideration that udelay() can behave oddly, i.e. not
delay for as long as requested.
Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk>:
"IIRC, Linus recommends a x2 factor on delays, especially
timeouts generated by these functions."
Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Lee Jones [Wed, 7 Oct 2015 12:23:27 +0000 (13:23 +0100)]
hwrng: st: dt: Fix trivial typo in node address
DT node names should not prefix their unit addresses with '0x'.
Suggested-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Catalin Vasile [Fri, 2 Oct 2015 10:13:18 +0000 (13:13 +0300)]
crypto: caam - add support for acipher xts(aes)
Add support for AES working in XEX-based tweaked-codebook mode with
ciphertext stealing (XTS).
Sector index - HW limitation: the CAAM device supports a sector index of
only 8 bytes inside the IV, instead of the whole 16 bytes received with
the request. This still represents 2^64 = 16,777,216 tera possible
values for the sector index.
Signed-off-by: Cristian Hristea <cristi.hristea@gmail.com> Signed-off-by: Horia Geanta <horia.geanta@freescale.com> Signed-off-by: Alex Porosanu <alexandru.porosanu@freescale.com> Signed-off-by: Catalin Vasile <catalin.vasile@freescale.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
LABBE Corentin [Fri, 2 Oct 2015 06:01:02 +0000 (08:01 +0200)]
crypto: qce - dma_map_sg can handle chained SG
The qce driver uses two dma_map_sg() paths depending on whether the SG
list is chained or not.
Since dma_map_sg() can handle both cases, clean up the code by removing
all references to SG chaining.
This removes the qce_mapsg, qce_unmapsg and qce_countsg functions.
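A minimal sketch of the direct call these cleanups rely on: both
sg_nents() and dma_map_sg() walk chained scatterlists themselves.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map(struct device *dev, struct scatterlist *sg,
                       enum dma_data_direction dir)
{
        /* No driver-side countsg/mapsg helpers needed: the core
         * follows sg chaining on its own.
         */
        int nents = sg_nents(sg);

        return dma_map_sg(dev, sg, nents, dir);
}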
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tom Lendacky [Thu, 1 Oct 2015 21:32:50 +0000 (16:32 -0500)]
crypto: ccp - Use module name in driver structures
The convention is to use the name of the module in the driver structures
that are used for registering the device. The CCP module is currently
using a descriptive name. Replace the descriptive name with module name.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tadeusz Struk [Wed, 30 Sep 2015 12:40:00 +0000 (05:40 -0700)]
crypto: qat - remove unneeded variable
Remove unneeded variable val_indx.
Issue found by a static analyzer.
Reported-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
John Haxby [Thu, 24 Sep 2015 17:24:35 +0000 (18:24 +0100)]
crypto: testmgr - Disable fips-allowed for authenc() and des() ciphers
No authenc() ciphers are FIPS approved, nor is ecb(des).
After the end of 2015, ansi_cprng will also be non-approved.
Signed-off-by: John Haxby <john.haxby@oracle.com> Acked-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The sahara driver uses two dma_map_sg() paths depending on whether the SG
list is chained or not.
Since dma_map_sg() can handle both cases, clean up the code by removing
all references to SG chaining.
This removes the sahara_sha_unmap_sg function.
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The caam driver uses two dma_map_sg() paths depending on whether the SG
list is chained or not.
Since dma_map_sg() can handle both cases, clean up the code by removing
all references to SG chaining.
This removes the dma_map_sg_chained, dma_unmap_sg_chained
and __sg_count functions.
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: talitos - dma_map_sg can handle chained SG
The talitos driver uses two dma_map_sg() paths depending on whether the
SG list is chained or not.
Since dma_map_sg() can handle both cases, clean up the code by removing
all references to SG chaining.
This removes the talitos_map_sg, talitos_unmap_sg_chain
and sg_count functions.
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tadeusz Struk [Tue, 22 Sep 2015 18:57:47 +0000 (11:57 -0700)]
crypto: qat - remove empty functions and turn qat_uregister fn to void
Some code cleanups after crypto API changes:
- Change qat_algs_unregister to a void function to keep it consistent
with qat_asym_algs_unregister.
- Remove empty functions qat_algs_init & qat_algs_exit.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Nicolas Iooss [Sun, 20 Sep 2015 14:42:36 +0000 (16:42 +0200)]
crypto: crc32c-pclmul - use .rodata instead of .rotata
Module crc32c-intel uses a special read-only data section named .rotata.
This section is defined for K_table, and its name seems to be a spelling
mistake for .rodata.
Fixes: 473946e674eb ("crypto: crc32c-pclmul - Shrink K_table to 32-bit words") Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
tim [Wed, 16 Sep 2015 23:35:53 +0000 (16:35 -0700)]
crypto: x86/sha - Restructure x86 sha512 glue code to expose all the available sha512 transforms
Restructure the x86 sha512 glue code so we will expose sha512 transforms
based on SSSE3, AVX or AVX2 as separate individual drivers when cpu
provides support. This will make it easy for alternative algorithms to
be used if desired and makes the code cleaner and easier to maintain.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
tim [Wed, 16 Sep 2015 23:35:23 +0000 (16:35 -0700)]
crypto: x86/sha - Restructure x86 sha256 glue code to expose all the available sha256 transforms
Restructure the x86 sha256 glue code so we will expose sha256 transforms
based on SSSE3, AVX, AVX2 or SHA-NI extension as separate individual
drivers when cpu provides such support. This will make it easy for
alternative algorithms to be used if desired and makes the code cleaner
and easier to maintain.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
tim [Wed, 16 Sep 2015 23:34:53 +0000 (16:34 -0700)]
crypto: x86/sha - Restructure x86 sha1 glue code to expose all the available sha1 transforms
Restructure the x86 sha1 glue code so we will expose sha1 transforms based
on SSSE3, AVX, AVX2 or SHA-NI extension as separate individual drivers
when cpu provides such support. This will make it easy for alternative
algorithms to be used if desired and makes the code cleaner and easier
to maintain.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
tim [Thu, 10 Sep 2015 22:27:26 +0000 (15:27 -0700)]
crypto: x86/sha - Add build support for Intel SHA Extensions optimized SHA1 and SHA256
This patch provides the configuration and build support to
include and build the optimized SHA1 and SHA256 update transforms
for the kernel's crypto library.
Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the glue code to detect and utilize the Intel SHA
extensions optimized SHA1 and SHA256 update transforms when available.
This code has been tested on Broxton for functionality.
Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
tim [Thu, 10 Sep 2015 22:27:13 +0000 (15:27 -0700)]
crypto: x86/sha - Intel SHA Extensions optimized SHA256 transform function
This patch includes the Intel SHA Extensions optimized implementation
of the SHA-256 update function. This function has been tested on the
Broxton platform and measured a speedup of 3.6x over the SSSE3
implementation for 4K blocks.
Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
tim [Thu, 10 Sep 2015 22:26:59 +0000 (15:26 -0700)]
crypto: x86/sha - Intel SHA Extensions optimized SHA1 transform function
This patch includes the Intel SHA Extensions optimized implementation
of the SHA-1 update function. This function has been tested on the
Broxton platform and measured a speedup of 3.6x over the SSSE3
implementation for 4K blocks.
Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Lee Jones [Thu, 17 Sep 2015 13:45:55 +0000 (14:45 +0100)]
hwrng: st - Add support for ST's HW Random Number Generator
Signed-off-by: Pankaj Dev <pankaj.dev@st.com> Signed-off-by: Lee Jones <lee.jones@linaro.org> Acked-by: Kieran Bingham <kieranbingham@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
... it will fail because the code doesn't currently take the '\n'
into consideration. Well, now it does.
Signed-off-by: Lee Jones <lee.jones@linaro.org> Acked-by: Peter Korsgaard <peter@korsgaard.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In April 2009, commit d405640 ("Driver Core: misc: add node name support
for misc devices.") inadvertently changed the device node name from
/dev/hw_random to /dev/hwrng. Since six years have passed since the change,
it seems impractical to change it back, as this node name is probably
considered ABI by now. So instead, we'll just change the Kconfig help
to match the current situation.
NB: It looks like rng-tools have already been updated.
Signed-off-by: Lee Jones <lee.jones@linaro.org> Acked-by: Kieran Bingham <kieranbingham@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In April 2009, commit d405640 ("Driver Core: misc: add node name support
for misc devices.") inadvertently changed the device node name from
/dev/hw_random to /dev/hwrng. Since six years have passed since the change,
it seems impractical to change it back, as this node name is probably
considered ABI by now. So instead, we'll just change the documentation
to match the current situation.
NB: It looks like rng-tools have already been updated.
Signed-off-by: Lee Jones <lee.jones@linaro.org> Acked-by: Kieran Bingham <kieranbingham@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tadeusz Struk [Wed, 16 Sep 2015 12:33:06 +0000 (05:33 -0700)]
crypto: qat - Add load balancing across devices
Load balancing of crypto instances only ever used a single device.
That was not a problem on the PF, but since there are only
one or two instances per VF, we need to load-balance across devices.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit a1efb01feca597b ("jump_label, locking/static_keys: Rename
JUMP_LABEL_TYPE_* and related helpers to the static_key* pattern")
introduced the definition of JUMP_TYPE_MASK in
include/linux/jump_label.h causing the following name collision:
In file included from drivers/crypto/caam/desc_constr.h:7:0,
from drivers/crypto/caam/ctrl.c:15:
drivers/crypto/caam/desc.h:1495:0: warning: "JUMP_TYPE_MASK" redefined
#define JUMP_TYPE_MASK (0x03 << JUMP_TYPE_SHIFT)
^
In file included from include/linux/module.h:19:0,
from drivers/crypto/caam/compat.h:9,
from drivers/crypto/caam/ctrl.c:11:
include/linux/jump_label.h:131:0: note: this is the location of the previous definition
#define JUMP_TYPE_MASK 1UL
As the JUMP_TYPE_MASK definition in desc.h is never used, we can safely
remove it to avoid the name collision.