Add DH support under the kpp API. Drop struct qat_rsa_request and
introduce a more generic struct qat_asym_request, shared between
RSA and DH requests.
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Extend the qat driver to use RSA CRT mode when all CRT-related components
are present in the private key. Simplify the code in qat_rsa_setkey by
adding qat_rsa_clear_ctx.
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- res = platform_get_resource(pdev, IORESOURCE_MEM, n);
... when != res
- if (res == NULL) { ... \(goto l;\|return ret;\) }
... when != res
+ res = platform_get_resource(pdev, IORESOURCE_MEM, n);
+ e = devm_ioremap_resource(e1, res);
// </smpl>
Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Anton Blanchard [Thu, 30 Jun 2016 22:19:44 +0000 (08:19 +1000)]
powerpc: define FUNC_START/FUNC_END
gcc provides FUNC_START/FUNC_END macros to help with creating
assembly functions. Mirror these in the kernel so we can more easily
share code between userspace and the kernel. FUNC_END is just a
stub since we don't currently annotate the end of kernel functions.
It might make sense to do a wholesale search and replace, but for
now just create a couple of defines.
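A sketch of what such defines might look like on the kernel side (illustrative
only; the mapping onto the existing _GLOBAL() helper is an assumption):
  /* Illustrative sketch: FUNC_END stays empty since we do not currently
   * annotate the end of kernel functions. */
  #define FUNC_START(name)	_GLOBAL(name)
  #define FUNC_END(name)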
Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Thu, 30 Jun 2016 03:00:13 +0000 (11:00 +0800)]
crypto: tcrypt - Do not bail on EINPROGRESS in multibuffer hash test
The multibuffer hash speed test is incorrectly bailing because
of an EINPROGRESS return value. This patch fixes it by setting
ret to zero if it is equal to -EINPROGRESS.
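The handling being restored is roughly of this form (a sketch, not the exact
tcrypt code):
  ret = crypto_ahash_digest(req);
  if (ret == -EINPROGRESS)
          ret = 0;        /* request was queued asynchronously; not an error */
  if (ret)
          break;          /* a real error: stop the speed test */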
Reported-by: Megha Dey <megha.dey@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Wed, 29 Jun 2016 11:32:28 +0000 (19:32 +0800)]
crypto: rsa-pkcs1pad - Avoid copying output when possible
In the vast majority of cases (all but 2^-32 on 32-bit and 2^-64 on
64-bit), the result from encryption/signing will require no padding.
This patch makes these two operations write their output directly
to the final destination. Only in the exceedingly rare cases where
fixup is needed do we copy it out and back to add the leading zeroes.
This patch also makes use of the crypto_akcipher_set_crypt API
instead of writing the akcipher request directly.
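Conceptually, the rare fixup amounts to shifting the data and re-inserting
the leading zeroes; one way to sketch the idea (variable names are
illustrative, not the pkcs1pad code itself):
  /* out was written directly by the akcipher; key_size is the modulus size. */
  if (out_len < key_size) {                          /* exceedingly rare case   */
          memmove(out + key_size - out_len, out, out_len);
          memset(out, 0, key_size - out_len);        /* restore leading zeroes  */
  }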
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Wed, 29 Jun 2016 11:32:22 +0000 (19:32 +0800)]
lib/mpi: Do not do sg_virt
Currently the mpi SG helpers use sg_virt which is completely
broken. It happens to work with normal kernel memory but will
fail with anything that is not linearly mapped.
This patch fixes this by using the SG iterator helpers.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Wed, 29 Jun 2016 11:32:21 +0000 (19:32 +0800)]
crypto: rsa - Generate fixed-length output
Every implementation of RSA that we have naturally generates
output with leading zeroes. The one and only user of RSA,
pkcs1pad, wants to have those leading zeroes in place; in fact, because
they are currently absent, it has to write those zeroes itself.
So we shouldn't be stripping leading zeroes in the first place.
In fact this patch makes rsa-generic produce output with fixed
length so that pkcs1pad does not need to do any extra work.
This patch also changes DH to use the new interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Wed, 29 Jun 2016 10:04:13 +0000 (18:04 +0800)]
crypto: api - Add crypto_inst_setname
This patch adds the helper crypto_inst_setname because the current
helper crypto_alloc_instance2 is no longer useful given that we
now look up the algorithm after we allocate the instance object.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Wed, 29 Jun 2016 10:03:59 +0000 (18:03 +0800)]
crypto: aesni - Use crypto_cipher to derive rfc4106 subkey
Currently aesni uses an async ctr(aes) to derive the rfc4106
subkey, which was presumably copied over from the generic rfc4106
code. Over there it's done that way because we already have a
ctr(aes) spawn. But it is simply overkill for aesni since we
have to go get a ctr(aes) from scratch anyway.
This patch simplifies the subkey derivation by using a straight
aes cipher instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Wed, 29 Jun 2016 10:03:47 +0000 (18:03 +0800)]
crypto: ahash - Add padding in crypto_ahash_extsize
The function crypto_ahash_extsize did not include padding when
computing the tfm context size. This patch fixes this by using
the generic crypto_alg_extsize helper.
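The point is simply that the extra size must include room for the
algorithm's alignment; a rough sketch of that kind of computation (an
assumption for illustration, not the actual crypto_alg_extsize body):
  /* Reserve enough slack so the tfm context can be aligned to alignmask + 1. */
  static unsigned int ctx_extsize(unsigned int ctxsize, unsigned int alignmask)
  {
          return ctxsize + alignmask;
  }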
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Tue, 28 Jun 2016 12:33:52 +0000 (20:33 +0800)]
crypto: tcrypt - Fix memory leaks/crashes in multibuffer hash speed test
This patch resolves a number of issues with the mb speed test
function:
* The tfm is never freed.
* Memory is allocated even when we're not using mb.
* When an error occurs we don't wait for completion for other requests.
* When an error occurs during allocation we may leak memory.
* The test function ignores plen but still runs for plen != blen.
* The backlog flag is incorrectly used (may crash).
This patch tries to resolve all these issues as well as making
the code consistent with the existing hash speed testing function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
crypto: tcrypt - Fix mixing printk/pr_err and obvious indentation issues
The recently added test_mb_ahash_speed() has a number of serious coding
style issues. Fix some of them:
1. Don't mix pr_err() and printk();
2. Don't wrap strings;
3. Properly align goto statement in if() block;
4. Align wrapped arguments on new line;
5. Don't wrap functions on first argument;
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Megha Dey [Mon, 27 Jun 2016 17:20:08 +0000 (10:20 -0700)]
crypto: sha512-mb - Crypto computation (x4 AVX2)
This patch introduces the assembly routines to do SHA512 computation on
buffers belonging to several jobs at once. The assembly routines are
optimized with AVX2 instructions, operating on 4 data lanes and using
AVX2 registers.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Megha Dey [Mon, 27 Jun 2016 17:20:07 +0000 (10:20 -0700)]
crypto: sha512-mb - Algorithm data structures
This patch introduces the data structures and prototypes of functions
needed for computing SHA512 hash using multi-buffer. Included are the
structures of the multi-buffer SHA512 job, job scheduler in C and x86
assembly.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Megha Dey [Mon, 27 Jun 2016 17:20:06 +0000 (10:20 -0700)]
crypto: sha512-mb - submit/flush routines for AVX2
This patch introduces the routines used to submit and flush buffers
belonging to SHA512 crypto jobs to the SHA512 multibuffer algorithm.
It is implemented mostly in assembly optimized with AVX2 instructions.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Megha Dey [Mon, 27 Jun 2016 17:20:04 +0000 (10:20 -0700)]
crypto: sha512-mb - SHA512 multibuffer job manager and glue code
This patch introduces the multi-buffer job manager which is responsible
for submitting scatter-gather buffers from several SHA512 jobs to the
multi-buffer algorithm. It also contains the flush routine that's called
by the crypto daemon to complete the job when no new jobs arrive before
the deadline of maximum latency of a SHA512 crypto job.
The SHA512 multi-buffer crypto algorithm is defined and initialized in this
patch.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Arnd Bergmann [Mon, 27 Jun 2016 09:17:40 +0000 (11:17 +0200)]
crypto: ux500 - do not build with -O0
The ARM allmodconfig build currently warns because of the
ux500 crypto driver not working well with the jump label
implementation that we started using for dynamic debug, which
breaks building with 'gcc -O0':
In file included from /git/arm-soc/include/linux/jump_label.h:105:0,
from /git/arm-soc/include/linux/dynamic_debug.h:5,
from /git/arm-soc/include/linux/printk.h:289,
from /git/arm-soc/include/linux/kernel.h:13,
from /git/arm-soc/include/linux/clk.h:16,
from /git/arm-soc/drivers/crypto/ux500/hash/hash_core.c:16:
/git/arm-soc/arch/arm/include/asm/jump_label.h: In function 'hash_set_dma_transfer':
/git/arm-soc/arch/arm/include/asm/jump_label.h:13:7: error: asm operand 0 probably doesn't match constraints [-Werror]
asm_volatile_goto("1:\n\t"
Turning off compiler optimizations has never really been supported
here, and it's only used when debugging the driver. I have not found
a good reason for doing this here, other than a misguided attempt
to produce more readable assembly output. Also, the driver is only
used in obsolete hardware that almost certainly nobody will spend
time debugging any more.
This just removes the -O0 flag from the compiler options.
Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Nishanth Menon [Fri, 24 Jun 2016 16:50:39 +0000 (11:50 -0500)]
hwrng: omap - Fix assumption that runtime_get_sync will always succeed
pm_runtime_get_sync does return an error value that must be checked for
error conditions; otherwise, for various reasons, the device may not be
enabled and the system will crash due to lack of clock to the hardware
module.
Before:
[   12.562784] [00000000] *pgd=fe193835
[   12.562792] Internal error: : 1406 [#1] SMP ARM
[...]
[   12.562864] CPU: 1 PID: 241 Comm: modprobe Not tainted 4.7.0-rc4-next-20160624 #2
[   12.562867] Hardware name: Generic DRA74X (Flattened Device Tree)
[   12.562872] task: ed51f140 ti: ed44c000 task.ti: ed44c000
[   12.562886] PC is at omap4_rng_init+0x20/0x84 [omap_rng]
[   12.562899] LR is at set_current_rng+0xc0/0x154 [rng_core]
[...]
After the proper checks:
[ 94.366705] omap_rng 48090000.rng: _od_fail_runtime_resume: FIXME:
missing hwmod/omap_dev info
[ 94.375767] omap_rng 48090000.rng: Failed to runtime_get device -19
[ 94.382351] omap_rng 48090000.rng: initialization failed.
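A minimal sketch of the check described above (device pointer and error
message are illustrative, mirroring the log shown above):
  ret = pm_runtime_get_sync(dev);
  if (ret < 0) {
          dev_err(dev, "Failed to runtime_get device %d\n", ret);
          pm_runtime_put_noidle(dev);     /* balance the usage count on failure */
          return ret;
  }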
Fixes: 665d92fa85b5 ("hwrng: OMAP: convert to use runtime PM") Cc: Paul Walmsley <paul@pwsan.com> Signed-off-by: Nishanth Menon <nm@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Megha Dey [Fri, 24 Jun 2016 01:40:48 +0000 (18:40 -0700)]
crypto: sha1-mb - rename sha-mb to sha1-mb
Until now, there was only support for the SHA1 multibuffer algorithm.
Hence, there was just one sha-mb folder. Now, with the introduction of
the SHA256 multi-buffer algorithm, it is logical to name the existing
folder as sha1-mb.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Megha Dey [Fri, 24 Jun 2016 01:40:47 +0000 (18:40 -0700)]
crypto: tcrypt - Add speed tests for SHA multibuffer algorithms
The existing test suite to calculate the speed of the SHA algorithms
assumes serial (single buffer) computation of data. With the SHA
multibuffer algorithms, we work on 8 lanes of data in parallel. Hence,
the need to introduce a new test suite to calculate the speed for these
algorithms.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Megha Dey [Fri, 24 Jun 2016 01:40:46 +0000 (18:40 -0700)]
crypto: sha256-mb - Crypto computation (x8 AVX2)
This patch introduces the assembly routines to do SHA256 computation
on buffers belonging to several jobs at once. The assembly routines
are optimized with AVX2 instructions, operating on 8 data lanes and
using AVX2 registers.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Megha Dey [Fri, 24 Jun 2016 01:40:45 +0000 (18:40 -0700)]
crypto: sha256-mb - Algorithm data structures
This patch introduces the data structures and prototypes of
functions needed for computing SHA256 hash using multi-buffer.
Included are the structures of the multi-buffer SHA256 job,
job scheduler in C and x86 assembly.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Megha Dey [Fri, 24 Jun 2016 01:40:44 +0000 (18:40 -0700)]
crypto: sha256-mb - submit/flush routines for AVX2
This patch introduces the routines used to submit and flush buffers
belonging to SHA256 crypto jobs to the SHA256 multibuffer algorithm. It
is implemented mostly in assembly optimized with AVX2 instructions.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Megha Dey [Fri, 24 Jun 2016 01:40:42 +0000 (18:40 -0700)]
crypto: sha256-mb - SHA256 multibuffer job manager and glue code
This patch introduces the multi-buffer job manager which is responsible for
submitting scatter-gather buffers from several SHA256 jobs to the
multi-buffer algorithm. It also contains the flush routine that's
called by the crypto daemon to complete the job when no new jobs arrive
before the deadline of maximum latency of a SHA256 crypto job.
The SHA256 multi-buffer crypto algorithm is defined and initialized in
this patch.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Florian Fainelli [Thu, 23 Jun 2016 00:27:02 +0000 (17:27 -0700)]
hwrng: bcm2835 - Add support for Broadcom BCM5301x
The Broadcom BCM5301x SoCs (Northstar) utilize the same random number
generator peripheral as Northstar Plus and BCM2835, but just like the
NSP SoC, we need to enable the interrupt.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Stephan Mueller [Wed, 22 Jun 2016 17:26:06 +0000 (19:26 +0200)]
crypto: jitterentropy - use ktime_get_ns as fallback
As part of the Y2038 development, __getnstimeofday is not supposed to be
used any more. It is now replaced with ktime_get_ns. The Jitter RNG uses
the time stamp to measure the execution time of a given code path and
tries to detect variations in the execution time. Therefore, the only
requirement the Jitter RNG has is a sufficiently high resolution to
detect these variations.
The change was tested on x86 to show an identical behavior as RDTSC. The
used test code simply measures the execution time of the heart of the
RNG:
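A hypothetical sketch of such a measurement (not the author's actual test
code), using ktime_get_ns() around the code path of interest:
  u64 t1, t2;

  t1 = ktime_get_ns();
  jent_code_path_under_test();     /* hypothetical: the RNG code path being timed      */
  t2 = ktime_get_ns();
  record_delta(t2 - t1);           /* hypothetical: variation in this delta is the     */
                                   /* execution-time jitter the RNG draws entropy from */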
Bin Liu [Wed, 22 Jun 2016 13:23:37 +0000 (16:23 +0300)]
crypto: omap-sham - set sw fallback to 240 bytes
Adds software fallback support for small crypto requests. In these cases,
it is undesirable to use DMA, as setting it up is itself a rather heavy
operation. This gives about 40% extra performance in the IPsec use case.
Signed-off-by: Bin Liu <b-liu@ti.com>
[t-kristo@ti.com: dropped the extra traces, updated some comments
on the code] Signed-off-by: Tero Kristo <t-kristo@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tero Kristo [Wed, 22 Jun 2016 13:23:35 +0000 (16:23 +0300)]
crypto: omap-sham - change queue size from 1 to 10
Change crypto queue size from 1 to 10 for omap SHA driver. This should
allow clients to enqueue requests more effectively to avoid serializing
whole crypto sequences, giving extra performance.
Signed-off-by: Tero Kristo <t-kristo@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tero Kristo [Wed, 22 Jun 2016 13:23:34 +0000 (16:23 +0300)]
crypto: omap-sham - use runtime_pm autosuspend for clock handling
Calling the runtime PM API for every block causes a serious performance hit
to crypto operations that are done on a long buffer. As crypto is performed
on a page boundary, encrypting large buffers can cause a series of crypto
operations divided by page, and the runtime PM API is called for each of
them.
Convert the driver to use runtime_pm autosuspend instead, with a default
timeout value of 1 second. This results in up to ~50% speedup.
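A rough sketch of the usual autosuspend conversion (illustrative; the
1000 ms delay matches the 1 second default mentioned above, other names are
assumptions):
  /* probe: enable lazy suspend with a 1 second timeout */
  pm_runtime_set_autosuspend_delay(dev, 1000);
  pm_runtime_use_autosuspend(dev);
  pm_runtime_enable(dev);

  /* end of each operation: drop the reference lazily instead of suspending at once */
  pm_runtime_mark_last_busy(dev);
  pm_runtime_put_autosuspend(dev);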
Signed-off-by: Tero Kristo <t-kristo@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: kpp - Key-agreement Protocol Primitives API (KPP)
Add a key-agreement protocol primitives (kpp) API which allows the
implementation of primitives required by protocols such as DH and ECDH.
The API is composed mainly of the following functions:
* set_secret() - It allows the user to set his secret, also
referred to as his private key, along with the parameters
known to both parties involved in the key-agreement session.
* generate_public_key() - It generates the public key to be sent to
the other counterpart involved in the key-agreement session. The
function has to be called after set_secret()
* generate_secret() - It generates the shared secret for the session
Other functions such as init() and exit() are provided to allow
cryptographic hardware to be initialized properly before use
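Putting the operations above together, a hypothetical consumer flow (all
identifiers here are illustrative, not the final in-kernel API) looks like:
  /* Hypothetical sketch; error handling omitted. */
  set_secret(tfm, private_key, params);             /* private key + shared DH/ECDH parameters */
  generate_public_key(tfm, our_public);             /* public value to send to the peer        */
  generate_secret(tfm, peer_public, shared_secret); /* shared secret for the session           */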
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Megha Dey [Wed, 22 Jun 2016 01:21:46 +0000 (18:21 -0700)]
crypto: sha1-mb - async implementation for sha1-mb
Herbert wants the sha1-mb algorithm to have an async implementation:
https://lkml.org/lkml/2016/4/5/286.
Currently, sha1-mb uses an async interface for the outer algorithm
and a sync interface for the inner algorithm. This patch introduces
an async interface for the inner algorithm as well.
Signed-off-by: Megha Dey <megha.dey@linux.intel.com> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Tue, 21 Jun 2016 08:55:17 +0000 (16:55 +0800)]
crypto: ghash-ce - Fix cryptd reordering
This patch fixes an old bug where requests can be reordered because
some are processed by cryptd while others are processed directly
in softirq context.
The fix is to always postpone to cryptd if there are currently
requests outstanding from the same tfm.
This patch also removes the redundant use of cryptd in the async
init function as init never touches the FPU.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Tue, 21 Jun 2016 08:55:16 +0000 (16:55 +0800)]
crypto: ghash-clmulni - Fix cryptd reordering
This patch fixes an old bug where requests can be reordered because
some are processed by cryptd while others are processed directly
in softirq context.
The fix is to always postpone to cryptd if there are currently
requests outstanding from the same tfm.
This patch also removes the redundant use of cryptd in the async
init function as init never touches the FPU.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Tue, 21 Jun 2016 08:55:15 +0000 (16:55 +0800)]
crypto: ablk_helper - Fix cryptd reordering
This patch fixes an old bug where requests can be reordered because
some are processed by cryptd while others are processed directly
in softirq context.
The fix is to always postpone to cryptd if there are currently
requests outstanding from the same tfm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Tue, 21 Jun 2016 08:55:14 +0000 (16:55 +0800)]
crypto: aesni - Fix cryptd reordering problem on gcm
This patch fixes an old bug where gcm requests can be reordered
because some are processed by cryptd while others are processed
directly in softirq context.
The fix is to always postpone to cryptd if there are currently
requests outstanding from the same tfm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Tue, 21 Jun 2016 08:55:13 +0000 (16:55 +0800)]
crypto: cryptd - Add helpers to check whether a tfm is queued
This patch adds helpers to check whether a given tfm is currently
queued. This is meant to be used by ablk_helper and similar
entities to ensure that no reordering is introduced because of
requests queued in cryptd with respect to requests being processed
in softirq context.
The per-cpu queue length limit is also increased to 1000 in line
with network limits.
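The dispatch decision the helpers enable is roughly the following (a sketch;
the queued-check, condition and direct-path helper names are illustrative):
  /* Defer to cryptd whenever it already holds requests for this tfm, so that
   * deferred and directly processed requests cannot be reordered. */
  if (!can_run_in_softirq() || cryptd_ahash_queued(cryptd_tfm))
          return crypto_ahash_digest(cryptd_req);  /* goes through the cryptd queue */

  return digest_directly(req);                     /* hypothetical direct path */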
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Romain Perier [Tue, 21 Jun 2016 08:08:40 +0000 (10:08 +0200)]
crypto: marvell - Increase the size of the crypto queue
Now that crypto requests are chained together at the DMA level, we
increase the size of the crypto queue for each engine. The result is
that as the backlog list is reached later, it does not stop the crypto
stack from sending asynchronous requests, so more cryptographic tasks
are processed by the engines.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Romain Perier [Tue, 21 Jun 2016 08:08:39 +0000 (10:08 +0200)]
crypto: marvell - Add support for chaining crypto requests in TDMA mode
The Cryptographic Engines and Security Accelerators (CESA) supports the
Multi-Packet Chain Mode. With this mode enabled, multiple tdma requests
can be chained and processed by the hardware without software
intervention. This mode was already activated; however, the crypto
requests were not chained together. By chaining them, we significantly
reduce the number of IRQs: instead of being interrupted at the end of each
crypto request, we are interrupted at the end of the last cryptographic
request processed by the engine.
This commit refactors the code, changes the code architecture and
adds the required data structures to chain cryptographic requests
together before sending them to an engine (stopped or possibly already
running).
Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Romain Perier [Tue, 21 Jun 2016 08:08:38 +0000 (10:08 +0200)]
crypto: marvell - Add load balancing between engines
This commit adds support for fine-grained load balancing on
multi-engine IPs. The engine is pre-selected based on its current load
and on the weight of the crypto request that is about to be processed.
The global crypto queue is also moved to each engine. These changes are
required to allow chaining crypto requests at the DMA level. By using
a crypto queue per engine, we make sure that we keep the state of the
tdma chain synchronized with the crypto queue. We also reduce contention
on 'cesa_dev->lock' and improve parallelism.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Romain Perier [Tue, 21 Jun 2016 08:08:37 +0000 (10:08 +0200)]
crypto: marvell - Move SRAM I/O operations to step functions
Currently, crypto requests are sent to the engines sequentially.
This commit moves the SRAM I/O operations from the prepare to the step
functions. It provides flexibility for future work and allows preparing
a request while the engine is running.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Romain Perier [Tue, 21 Jun 2016 08:08:36 +0000 (10:08 +0200)]
crypto: marvell - Add a complete operation for async requests
So far, the 'process' operation was used to check if the current request
was correctly handled by the engine; if that was the case, it copied
information from the SRAM to main memory. Now, we split this
operation. We keep the 'process' operation, which still checks if the
request was correctly handled by the engine or not, then we add a new
operation for completion. The 'complete' method copies the content of
the SRAM to memory. This will soon become useful if we want to call
the process and the complete operations from different locations
depending on the type of the request (different cleanup logic).
Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Romain Perier [Tue, 21 Jun 2016 08:08:35 +0000 (10:08 +0200)]
crypto: marvell - Move tdma chain out of mv_cesa_tdma_req and remove it
Currently, the only way to access the tdma chain is to use the 'req'
union from a mv_cesa_{ablkcipher,ahash}. This will soon become a problem
if we want to handle the TDMA chaining vs standard/non-DMA processing in
a generic way (with generic functions at the cesa.c level detecting
whether the request should be queued at the DMA level or not). Hence the
decision to move the chain field to the mv_cesa_req level at the expense
of adding two void * fields to all request contexts (including non-DMA
ones). To limit the overhead, we get rid of the type field, which can now
be deduced from the req->chain.first value. Once these changes are done
the union is no longer needed, so remove it and move
mv_cesa_ablkcipher_std_req and mv_cesa_req to mv_cesa_ablkcipher_req
directly. There is also no need to keep the 'base' field in the union of
mv_cesa_ahash_req, so move it into the upper structure.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Romain Perier [Tue, 21 Jun 2016 08:08:34 +0000 (10:08 +0200)]
crypto: marvell - Copy IV vectors by DMA transfers for acipher requests
Add a TDMA descriptor at the end of the request for copying the
output IV vector via a DMA transfer. This is a good way of offloading
as much processing as possible to the DMA and the crypto engine.
This is also required for processing multiple cipher requests
in chained mode, otherwise the content of the IV vector would be
overwritten by the last processed request.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Romain Perier [Tue, 21 Jun 2016 08:08:33 +0000 (10:08 +0200)]
crypto: marvell - Fix wrong type check in dma functions
So far, the way that the type of a TDMA operation was checked was wrong.
We have to use the type mask in order to get the right part of the flag
containing the type of the operation.
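The bug class is a flags comparison without masking; a generic sketch
(names are illustrative, not the CESA ones):
  /* Wrong: compares the whole flags word against a single type value. */
  if (tdma->flags == OP_TYPE_DATA)
          handle_data_desc(tdma);

  /* Right: isolate the type bits with the mask before comparing. */
  if ((tdma->flags & OP_TYPE_MASK) == OP_TYPE_DATA)
          handle_data_desc(tdma);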
Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Romain Perier [Tue, 21 Jun 2016 08:08:32 +0000 (10:08 +0200)]
crypto: marvell - Check engine is not already running when enabling a req
Add a BUG_ON() call when the driver tries to launch a crypto request
while the engine is still processing the previous one. This replaces
a silent system hang with a verbose kernel panic and the associated
backtrace to let the user know that something went wrong in the CESA
driver.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Romain Perier [Tue, 21 Jun 2016 08:08:31 +0000 (10:08 +0200)]
crypto: marvell - Add a macro constant for the size of the crypto queue
Add a macro constant to be used for the size of the crypto queue,
instead of using a numeric value directly. It will be easier to
maintain in case we add more than one crypto queue of the same size.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Dan Carpenter [Fri, 17 Jun 2016 09:16:19 +0000 (12:16 +0300)]
crypto: drbg - fix an error code in drbg_init_sym_kernel()
We accidentally return PTR_ERR(NULL) which is success but we should
return -ENOMEM.
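The bug pattern, sketched with illustrative names:
  sk_tfm = allocate_sym_tfm();      /* hypothetical allocator: returns NULL on failure        */
  if (!sk_tfm)
          return PTR_ERR(sk_tfm);   /* bug: PTR_ERR(NULL) evaluates to 0, i.e. "success"      */

  /* fixed version */
  if (!sk_tfm)
          return -ENOMEM;           /* report the real error code                             */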
Fixes: 355912852115 ('crypto: drbg - use CTR AES instead of ECB AES') Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Jeff Garzik [Fri, 17 Jun 2016 05:00:35 +0000 (10:30 +0530)]
crypto: sha3 - Add SHA-3 hash algorithm
This patch adds a software implementation of the SHA-3 algorithm. It is
based on the original implementation posted in patch
https://lwn.net/Articles/518415/, with additional changes to match the
padding rules specified in the SHA-3 specification.
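The padding rule in question (domain bits "01" followed by pad10*1)
reduces, bytewise, to roughly this sketch (buffer and variable names are
illustrative):
  /* block[] is the rate-sized buffer; pos is the number of message bytes in it. */
  memset(block + pos, 0, rate - pos);
  block[pos]      ^= 0x06;   /* domain suffix "01" plus the first padding bit             */
  block[rate - 1] ^= 0x80;   /* final padding bit; combines to 0x86 when pos == rate - 1  */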
Signed-off-by: Jeff Garzik <jgarzik@redhat.com> Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Arnd Bergmann [Thu, 16 Jun 2016 09:05:46 +0000 (11:05 +0200)]
crypto: caam - fix misspelled upper_32_bits
An endianness fix mistakenly used higher_32_bits() instead of
upper_32_bits(), and that doesn't exist:
drivers/crypto/caam/desc_constr.h: In function 'append_ptr':
drivers/crypto/caam/desc_constr.h:84:75: error: implicit declaration of function 'higher_32_bits' [-Werror=implicit-function-declaration]
*offset = cpu_to_caam_dma(ptr);
Stephan Mueller [Tue, 14 Jun 2016 05:35:37 +0000 (07:35 +0200)]
crypto: drbg - use full CTR AES for update
The CTR DRBG update function performs a full CTR AES operation including
the XOR with "plaintext" data. Hence, remove the XOR from the code and
use the CTR mode to do the XOR.
Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Stephan Mueller [Tue, 14 Jun 2016 05:34:13 +0000 (07:34 +0200)]
crypto: drbg - use CTR AES instead of ECB AES
The CTR DRBG derives its random data from the counter that is encrypted
with AES.
This patch now changes the CTR DRBG implementation such that the
CTR AES mode is employed. This allows the use of streamlined CTR AES
implementations such as ctr-aes-aesni.
Unfortunately there are the following subtle changes we need to apply
when using the CTR AES mode:
- the CTR mode increments the counter after the cipher operation, but
the CTR DRBG requires the increment before the cipher op. Hence, the
crypto_inc is applied to the counter (drbg->V) once it is
recalculated.
- the CTR mode wants to encrypt data, but the CTR DRBG is interested in
the encrypted counter only. The full CTR mode is the XOR of the
encrypted counter with the plaintext data. To access the encrypted
counter, the patch uses a NULL data vector as plaintext to be
"encrypted".
Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>