Paul Gortmaker [Tue, 18 Jun 2013 21:10:12 +0000 (17:10 -0400)]
sh: delete __cpuinit usage from all sh files
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/sh uses of the __cpuinit macros from
all C files. Currently sh does not have any __CPUINIT used in
assembly files.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Paul Mundt <lethal@linux-sh.org> Cc: linux-sh@vger.kernel.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Paul Gortmaker [Tue, 18 Jun 2013 21:04:52 +0000 (17:04 -0400)]
s390: delete __cpuinit usage from all s390 files
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/s390 uses of the __cpuinit macros from
all C files. Currently s390 does not have any __CPUINIT used in
assembly files.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: linux390@de.ibm.com Cc: linux-s390@vger.kernel.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Paul Gortmaker [Tue, 18 Jun 2013 20:56:21 +0000 (16:56 -0400)]
blackfin: delete __cpuinit usage from all blackfin files
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/blackfin uses of the __cpuinit macros from
all C files. Currently blackfin does not have any __CPUINIT used in
assembly files.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Mike Frysinger <vapier@gentoo.org> Cc: Bob Liu <lliubbo@gmail.com> Cc: Sonic Zhang <sonic.zhang@analog.com> Cc: uclinux-dist-devel@blackfin.uclinux.org Acked-by: Mike Frysinger <vapier@gentoo.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Paul Gortmaker [Tue, 18 Jun 2013 14:18:31 +0000 (10:18 -0400)]
arm64: delete __cpuinit usage from all users
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/arm64 uses of the __cpuinit macros from
all C files. Currently arm64 does not have any __CPUINIT used in
assembly files.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Paul Gortmaker [Mon, 17 Jun 2013 19:43:14 +0000 (15:43 -0400)]
sparc: delete __cpuinit/__CPUINIT usage from all users
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/sparc uses of the __cpuinit macros from
C files and removes __CPUINIT from assembly files. Note that even
though arch/sparc/kernel/trampoline_64.S has instances of ".previous"
in it, they are all paired off against explicit ".section" directives,
and not implicitly paired with __CPUINIT (unlike the mips and arm cases).
[1] https://lkml.org/lkml/2013/5/20/589
Cc: "David S. Miller" <davem@davemloft.net> Cc: sparclinux@vger.kernel.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Paul Gortmaker [Mon, 17 Jun 2013 19:43:14 +0000 (15:43 -0400)]
arm: delete __cpuinit/__CPUINIT usage from all ARM users
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.
This removes all the ARM uses of the __cpuinit macros from C code,
and all __CPUINIT from assembly code. The ARM code also had two ".previous"
section statements that were paired off against __CPUINIT
(aka .section ".cpuinit.text"); those are removed here as well.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Russell King <linux@arm.linux.org.uk> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
mips: delete __cpuinit/__CPUINIT usage from all mips files
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.
Here, we remove all the MIPS __cpuinit from C code and __CPUINIT
from asm files. MIPS is interesting in this respect, because there
are also uasm users hiding behind their own renamed versions of the
__cpuinit macros.
[1] https://lkml.org/lkml/2013/5/20/589
[ralf@linux-mips.org: Folded in Paul's followup fix.]
Paul Gortmaker [Mon, 17 Jun 2013 19:43:14 +0000 (15:43 -0400)]
parisc: delete __cpuinit usage from all users
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
This removes all the parisc uses of the __cpuinit macros.
[1] https://lkml.org/lkml/2013/5/20/589
Acked-by: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: Helge Deller <deller@gmx.de> Cc: linux-parisc@vger.kernel.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Paul Gortmaker [Mon, 17 Jun 2013 19:43:14 +0000 (15:43 -0400)]
powerpc: delete __cpuinit usage from all users
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
This removes all the powerpc uses of the __cpuinit macros. There
are no __CPUINIT users in assembly files in powerpc.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Josh Boyer <jwboyer@gmail.com> Cc: Matt Porter <mporter@kernel.crashing.org> Cc: Kumar Gala <galak@kernel.crashing.org> Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Paul Gortmaker [Mon, 17 Jun 2013 19:43:14 +0000 (15:43 -0400)]
alpha: delete __cpuinit usage from all users
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
This removes all the alpha uses of the __cpuinit macros.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Matt Turner <mattst88@gmail.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Paul Gortmaker [Wed, 19 Jun 2013 23:30:48 +0000 (19:30 -0400)]
modpost: remove all traces of cpuinit/cpuexit sections
Delete all audit rules that were checking how the .cpuXYZ
related sections were inter-operating with other __init
like sections, now that __cpuinit is gone. Update the linker
script to not have any knowledge of .cpuinit sections.
[lds.h update courtesy of Ralf Baechle <ralf@linux-mips.org>]
Cc: Arnd Bergmann <arnd@arndb.de> Cc: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Paul Gortmaker [Mon, 17 Jun 2013 22:34:14 +0000 (18:34 -0400)]
init.h: remove __cpuinit sections from the kernel
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
As an interim step, we can dummy out the macros to be no-ops, and
this will allow us to avoid a giant tree-wide patch, and instead
we can feed in smaller chunks mainly via the arch/ trees. This
is in keeping with commit 78d86c213f28193082b5d8a1a424044b7ba406f1
("init.h: Remove __dev* sections from the kernel")
We don't strictly need to dummy out the macros to do this, but if
we don't then some harmless section mismatch warnings may temporarily
result. For example, notify_cpu_starting() and cpu_up() are arch
independent (kernel/cpu.c) and are flagged as __cpuinit. And hence
the calling functions in the arch specific code are also expected
to be __cpuinit -- if not, then we get the section mismatch warning.
Two of the three __CPUINIT variants are not used whatsoever, and
so they are simply removed directly at this point in time.
[1] https://lkml.org/lkml/2013/5/20/589
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Peng Tao [Thu, 27 Jun 2013 23:54:26 +0000 (09:54 +1000)]
staging/lustre: replace num_physpages with totalram_pages
The global variable num_physpages is going away. Replace it
with totalram_pages.
Signed-off-by: Peng Tao <tao.peng@emc.com> Cc: Jiang Liu <jiang.liu@huawei.com> Cc: Michal Hocko <mhocko@suse.cz> Cc: Dave Chinner <dchinner@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Peng Tao [Thu, 27 Jun 2013 23:54:26 +0000 (09:54 +1000)]
staging/lustre/libcfs: cleanup linux-mem.h
Remove the shrinker-related wrappers.
Signed-off-by: Peng Tao <tao.peng@emc.com> Signed-off-by: Andreas Dilger <andreas.dilger@intel.com> Cc: Michal Hocko <mhocko@suse.cz> Cc: Dave Chinner <dchinner@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Peng Tao [Thu, 27 Jun 2013 23:54:26 +0000 (09:54 +1000)]
staging/lustre/ptlrpc: convert to new shrinker API
Convert the sptlrpc encode pool shrinker to the new count/scan API.
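For reference, a minimal sketch of the count/scan shrinker shape these
lustre conversions move to (the function and helper names here are
illustrative, not the actual lustre code):

    #include <linux/shrinker.h>

    static unsigned long enc_pool_shrink_count(struct shrinker *s,
                                               struct shrink_control *sc)
    {
            /* report how many objects could be freed right now */
            return enc_pool_free_pages();           /* hypothetical helper */
    }

    static unsigned long enc_pool_shrink_scan(struct shrinker *s,
                                              struct shrink_control *sc)
    {
            /* free up to sc->nr_to_scan objects, return how many were freed */
            return enc_pool_release_pages(sc->nr_to_scan);  /* hypothetical */
    }

    static struct shrinker enc_pool_shrinker = {
            .count_objects  = enc_pool_shrink_count,
            .scan_objects   = enc_pool_shrink_scan,
            .seeks          = DEFAULT_SEEKS,
    };

    /* register_shrinker(&enc_pool_shrinker) at init,
     * unregister_shrinker(&enc_pool_shrinker) at exit. */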
Signed-off-by: Peng Tao <tao.peng@emc.com> Signed-off-by: Andreas Dilger <andreas.dilger@intel.com> Cc: Michal Hocko <mhocko@suse.cz> Cc: Dave Chinner <dchinner@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Peng Tao [Thu, 27 Jun 2013 23:54:26 +0000 (09:54 +1000)]
staging/lustre/obdclass: convert lu_object shrinker to count/scan API
Convert the lu_object shrinker to the new count/scan API.
Signed-off-by: Peng Tao <tao.peng@emc.com> Signed-off-by: Andreas Dilger <andreas.dilger@intel.com> Cc: Michal Hocko <mhocko@suse.cz> Cc: Dave Chinner <dchinner@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Peng Tao [Thu, 27 Jun 2013 23:54:25 +0000 (09:54 +1000)]
staging/lustre/ldlm: convert shrinkers to count/scan API
Convert the ldlm shrinkers to the new count/scan API.
Signed-off-by: Peng Tao <tao.peng@emc.com> Signed-off-by: Andreas Dilger <andreas.dilger@intel.com> Cc: Michal Hocko <mhocko@suse.cz> Cc: Dave Chinner <dchinner@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Daniel Tang [Thu, 27 Jun 2013 23:54:25 +0000 (09:54 +1000)]
scripts/sortextable.c: fix building on non-Linux systems
scripts/sortextable.c fails to compile on non-Linux systems due to the
missing 'linux/types.h' header.
Unless I'm missing something obvious, including the standard 'inttypes.h'
header instead and using uintX_t types instead of __uX types does the
exact same job and doesn't break compilation on non-Linux systems.
Signed-off-by: Daniel Tang <dt.tangr@gmail.com> Cc: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Dan Carpenter [Thu, 27 Jun 2013 23:54:25 +0000 (09:54 +1000)]
lib/scatterlist: error handling in __sg_alloc_table()
I was reviewing code which I suspected might allocate a zero-size SG
table; that would cause memory corruption. Also, we can't return before
doing the memset or we could end up using uninitialized memory in the
cleanup path.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Cc: Akinobu Mita <akinobu.mita@gmail.com> Cc: Imre Deak <imre.deak@intel.com> Cc: Tejun Heo <tj@kernel.org> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Maxim Levitsky <maximlevitsky@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Thu, 27 Jun 2013 23:54:24 +0000 (09:54 +1000)]
scsi_debug: fix do_device_access() with wrap around range
do_device_access() is a function that abstracts copying SG list from/to
ramdisk storage (fake_storep).
It must deal with ranges exceeding the actual fake_storep size, because
such ranges are valid if virtual_gb is set greater than zero, and they
should be treated as if fake_storep were repeatedly mirrored up to the
virtual size.
Unfortunately, it can't deal with a range that wraps around the end of
fake_storep. A wrap-around range is copied by two
sg_copy_{from,to}_buffer() calls, but sg_copy_{from,to}_buffer() can't
copy from/to the middle of an SG list, so the second call can't
copy correctly.
This fixes it by using sg_pcopy_{from,to}_buffer(), which can copy from/to
the middle of an SG list.
This also simplifies the assignment of sdb->resid in
fill_from_dev_buffer(): because fill_from_dev_buffer() is now called only
once per command execution cycle, there is no longer any need to take care
to decrease sdb->resid when it is called more than once.
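To illustrate only (this is a sketch, not the actual scsi_debug code, and
the names for the store size and sector size are assumptions), a read of a
wrap-around range can now be done with two calls, the second one skipping
the bytes of the SG list that were already filled:

    if (block + num > store_blks) {
            /* the range wraps past the end of the fake store */
            unsigned int rest = block + num - store_blks;
            unsigned int head = num - rest;

            sg_pcopy_from_buffer(sgl, nents,
                                 fake_storep + block * sector_size,
                                 head * sector_size, 0);
            sg_pcopy_from_buffer(sgl, nents, fake_storep,
                                 rest * sector_size,
                                 head * sector_size); /* skip what was copied above */
    }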
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: "James E.J. Bottomley" <JBottomley@parallels.com> Cc: Douglas Gilbert <dgilbert@interlog.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Horia Geanta <horia.geanta@freescale.com> Cc: Imre Deak <imre.deak@intel.com> Cc: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Thu, 27 Jun 2013 23:54:24 +0000 (09:54 +1000)]
lib/scatterlist: introduce sg_pcopy_from_buffer() and sg_pcopy_to_buffer()
The only difference between sg_pcopy_{from,to}_buffer() and
sg_copy_{from,to}_buffer() is an additional argument that specifies the
number of bytes to skip in the SG list before copying.
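A sketch of how the new helpers relate to the existing ones (parameter
types are approximated from this description; see lib/scatterlist.c for
the exact prototypes):

    size_t sg_copy_from_buffer(struct scatterlist *sgl, unsigned int nents,
                               void *buf, size_t buflen);
    size_t sg_copy_to_buffer(struct scatterlist *sgl, unsigned int nents,
                             void *buf, size_t buflen);

    /* New: same as above, plus the number of bytes to skip in the SG list
     * before the copy starts. */
    size_t sg_pcopy_from_buffer(struct scatterlist *sgl, unsigned int nents,
                                void *buf, size_t buflen, off_t skip);
    size_t sg_pcopy_to_buffer(struct scatterlist *sgl, unsigned int nents,
                              void *buf, size_t buflen, off_t skip);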
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: "James E.J. Bottomley" <JBottomley@parallels.com> Cc: Douglas Gilbert <dgilbert@interlog.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Horia Geanta <horia.geanta@freescale.com> Cc: Imre Deak <imre.deak@intel.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Thu, 27 Jun 2013 23:54:23 +0000 (09:54 +1000)]
lib/scatterlist: factor out sg_miter_get_next_page() from sg_miter_next()
This patchset introduces sg_pcopy_from_buffer() and sg_pcopy_to_buffer(),
which copy data between a linear buffer and an SG list.
The only difference between sg_pcopy_{from,to}_buffer() and
sg_copy_{from,to}_buffer() is an additional argument that specifies the
number of bytes to skip in the SG list before copying.
The main reason for introducing these functions is to fix a problem in
scsi_debug module. And there is a local function in crypto/talitos
module, which can be replaced by sg_pcopy_to_buffer().
This patch:
sg_miter_get_next_page() is used to advance the page iterator to the next page
if necessary, and will be used to implement the variants of
sg_copy_{from,to}_buffer() later.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Cc: Tejun Heo <tj@kernel.org> Cc: Imre Deak <imre.deak@intel.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: "James E.J. Bottomley" <JBottomley@parallels.com> Cc: Douglas Gilbert <dgilbert@interlog.com> Cc: Horia Geanta <horia.geanta@freescale.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Chanho Min [Thu, 27 Jun 2013 23:54:23 +0000 (09:54 +1000)]
crypto: add lz4 Cryptographic API
Add support for the lz4 and lz4hc compression algorithms using the lib/lz4/*
codebase.
Signed-off-by: Chanho Min <chanho.min@lge.com> Cc: "Darrick J. Wong" <djwong@us.ibm.com> Cc: Bob Pearson <rpearson@systemfabricworks.com> Cc: Richard Weinberger <richard@nod.at> Cc: Herbert Xu <herbert@gondor.hengli.com.au> Cc: Yann Collet <yann.collet.73@gmail.com> Cc: Kyungsik Lee <kyungsik.lee@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Chanho Min [Thu, 27 Jun 2013 23:54:22 +0000 (09:54 +1000)]
lib: add lz4 compressor module
This patchset is for supporting LZ4 compression and the crypto API using it.
As shown below, the compressed data is a little bit bigger, but compression
is faster when unaligned memory access is enabled. We can use lz4
de/compression through the crypto API as well. It will also be useful for
other potential users of lz4 compression.
lz4 Compression Benchmark:
Compiler: ARM gcc 4.6.4
ARMv7, 1 GHz based board
Kernel: linux 3.4
Uncompressed data Size: 101 MB
                Compressed Size    Compression Speed
LZO             72.1MB             32.1MB/s, 33.0MB/s(UA)
LZ4             75.1MB             30.4MB/s, 35.9MB/s(UA)
LZ4HC           59.8MB              2.4MB/s,  2.5MB/s(UA)
- UA: Unaligned memory Access support
- Latest patch set for LZO applied
This patch:
Add support for LZ4 compression in the Linux Kernel. LZ4 Compression APIs
for kernel are based on LZ4 implementation by Yann Collet and were changed
for kernel coding style.
lz4_compress() supports basic lz4 compression, whereas lz4hc_compress()
supports high compression: CPU performance is lower but the compression
ratio is higher. Both require pre-allocated working memory of the defined
size, and the destination buffer must be allocated with the size given by
lz4_compressbound().
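A rough usage sketch, with the declarations approximated from the
description above (consult include/linux/lz4.h for the exact interface):

    #include <linux/lz4.h>
    #include <linux/vmalloc.h>

    static int example_lz4_pack(const unsigned char *src, size_t src_len,
                                unsigned char *dst, size_t *dst_len)
    {
            /* dst must be at least lz4_compressbound(src_len) bytes */
            void *wrkmem = vmalloc(LZ4_MEM_COMPRESS);
            int ret;

            if (!wrkmem)
                    return -ENOMEM;

            /* returns 0 on success; *dst_len is updated to the compressed size */
            ret = lz4_compress(src, src_len, dst, dst_len, wrkmem);

            vfree(wrkmem);
            return ret ? -EINVAL : 0;
    }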
Signed-off-by: Chanho Min <chanho.min@lge.com> Cc: "Darrick J. Wong" <djwong@us.ibm.com> Cc: Bob Pearson <rpearson@systemfabricworks.com> Cc: Richard Weinberger <richard@nod.at> Cc: Herbert Xu <herbert@gondor.hengli.com.au> Cc: Yann Collet <yann.collet.73@gmail.com> Cc: Kyungsik Lee <kyungsik.lee@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kyungsik Lee [Thu, 27 Jun 2013 23:54:21 +0000 (09:54 +1000)]
arm: Remove enforced Os flag for LZ4 decompressor
-Os was enforced here based on the decompression-time results below, where
it is slightly faster than -O2.
But further tests with UA show that -O2 is the right choice, especially
with unaligned access enabled; the gap of a few counts in the normal
decompression mode is small enough to justify removing -Os.
Decompression Time (counts)
        Normal   UA enabled
-Os     6717     3447
-O2     6720     2728
Note: ARM v7, Kernel 3.4, counter freq. = 32768 Hz,
UA = Unaligned Access, gcc version 4.6.2
Signed-off-by: Kyungsik Lee <kyungsik.lee@lge.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Borislav Petkov <bp@alien8.de> Cc: Florian Fainelli <florian@openwrt.org> Cc: Yann Collet <yann.collet.73@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kyungsik Lee [Thu, 27 Jun 2013 23:54:21 +0000 (09:54 +1000)]
kbuild: fix for updated LZ4 tool with the new streaming format
LZ4 has been updated with the LZ4 Streaming Format specification (v1.3).
lz4demo is replaced by lz4c. lz4c supports both the new streaming and the
legacy format via the -l option.
This patch makes use of lz4c to support the legacy format, which is
used for LZ4 de/compression in the Linux kernel.
Link: https://code.google.com/p/lz4/source/checkout Signed-off-by: Kyungsik Lee <kyungsik.lee@lge.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Borislav Petkov <bp@alien8.de> Cc: Florian Fainelli <florian@openwrt.org> Cc: Yann Collet <yann.collet.73@gmail.com> Cc: Chanho Min <chanho.min@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2. ARMv7, 1.7GHz based board
Kernel: linux 3.7
Uncompressed Kernel Size: 14MB
        Compressed Size    Decompression Speed
LZO     6.0MB              34.1MB/s, 52.2MB/s(UA)
LZ4     6.5MB              86.7MB/s
- UA: Unaligned memory Access support
- Latest patch set for LZO applied
This patch set is for adding support for LZ4-compressed Kernel. LZ4 is a
very fast lossless compression algorithm and it also features an extremely
fast decoder [1].
But we already have five decompressors, and one question that does
arise is where we stop adding new ones. This issue has been discussed
and a conclusion was reached [2].
Russell King said that we should have:
- one decompressor which is the fastest
- one decompressor for the highest compression ratio
- one popular decompressor (eg conventional gzip)
If we have a replacement one for one of these, then it should do exactly
that: replace it.
The benchmark shows an 8% increase in image size versus a 66% increase in
decompression speed compared to LZO (which has been known as the fastest
decompressor in the kernel). Therefore the "fast but may not be small"
compression title has clearly been taken by LZ4 [3].
Chanho Min [Thu, 27 Jun 2013 23:54:20 +0000 (09:54 +1000)]
lib: add weak clz/ctz functions
Some architectures need __c[lt]z[sd]i2() for __builtin_c[lt]z[ll], and its
absence causes a build failure. These can be implemented using fls()/__ffs()
and overridden by linking arch-specific versions that may not be implemented yet.
This is required by "lib: add lz4 compressor module".
Reference: https://lkml.org/lkml/2013/4/18/603
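A minimal sketch of how such weak fallbacks can be built on fls()/__ffs()
(this approximates the idea, not necessarily the exact lib/ file):

    #include <linux/bitops.h>

    /* count trailing zeros of a non-zero 32-bit value */
    int __weak __ctzsi2(int val)
    {
            return __ffs(val);
    }

    /* count leading zeros of a non-zero 32-bit value */
    int __weak __clzsi2(int val)
    {
            return 32 - fls(val);
    }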
Signed-off-by: Chanho Min <chanho.min@lge.com> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: "Darrick J. Wong" <djwong@us.ibm.com> Cc: Bob Pearson <rpearson@systemfabricworks.com> Cc: Richard Weinberger <richard@nod.at> Cc: Herbert Xu <herbert@gondor.hengli.com.au> Cc: Yann Collet <yann.collet.73@gmail.com> Cc: Kyungsik Lee <kyungsik.lee@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Robin Holt [Thu, 27 Jun 2013 23:54:19 +0000 (09:54 +1000)]
reboot: move arch/x86 reboot= handling to generic kernel
Merge together the unicore32, arm, and x86 reboot= command line parameter
handling.
Signed-off-by: Robin Holt <holt@sgi.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Russ Anderson <rja@sgi.com> Cc: Robin Holt <holt@sgi.com> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Robin Holt <holt@sgi.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Reported-by: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Robin Holt <holt@sgi.com> Cc: Russ Anderson <rja@sgi.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Robin Holt [Thu, 27 Jun 2013 23:54:18 +0000 (09:54 +1000)]
reboot: arm: change reboot_mode to use enum reboot_mode
Preparing to move the parsing of reboot= to generic kernel code forces the
change in reboot_mode handling to use the enum.
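Roughly the generic form that reboot_mode moves toward (the exact set of
enumerators in include/linux/reboot.h is assumed here from the existing
ARM reboot modes):

    enum reboot_mode {
            REBOOT_COLD = 0,
            REBOOT_WARM,
            REBOOT_HARD,
            REBOOT_SOFT,
            REBOOT_GPIO,
    };

    extern enum reboot_mode reboot_mode;    /* set while parsing reboot= */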
Signed-off-by: Robin Holt <holt@sgi.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Russ Anderson <rja@sgi.com> Cc: Robin Holt <holt@sgi.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Robin Holt [Thu, 27 Jun 2013 23:54:18 +0000 (09:54 +1000)]
reboot: arm: prepare reboot_mode for moving to generic kernel code
Prepare for moving the parsing of reboot= to the generic kernel code
by making reboot_mode into a more generic form.
Signed-off-by: Robin Holt <holt@sgi.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Russ Anderson <rja@sgi.com> Cc: Robin Holt <holt@sgi.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Robin Holt [Thu, 27 Jun 2013 23:54:18 +0000 (09:54 +1000)]
reboot: arm: remove unused restart_mode fields from some arm subarchs
These restart_mode fields are not used at all. Remove them to make moving
the reboot= cmdline options to the general kernel easier.
Signed-off-by: Robin Holt <holt@sgi.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Russ Anderson <rja@sgi.com> Cc: Robin Holt <holt@sgi.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Robin Holt [Thu, 27 Jun 2013 23:54:17 +0000 (09:54 +1000)]
reboot: unicore32: prepare reboot_mode for moving to generic kernel code
Prepare for moving the parsing of reboot= to the generic kernel code
by making reboot_mode into a more generic form.
Signed-off-by: Robin Holt <holt@sgi.com> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Russ Anderson <rja@sgi.com> Cc: Robin Holt <holt@sgi.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: H. Peter Anvin <hpa@zytor.com> Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Robin Holt [Thu, 27 Jun 2013 23:54:17 +0000 (09:54 +1000)]
reboot: x86: prepare reboot_mode for moving to generic kernel code
Prepare for moving the parsing of reboot= to the generic kernel code
by making reboot_mode into a more generic form.
Signed-off-by: Robin Holt <holt@sgi.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Miguel Boton <mboton.lkml@gmail.com> Cc: Russ Anderson <rja@sgi.com> Cc: Robin Holt <holt@sgi.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Acked-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Robin Holt [Thu, 27 Jun 2013 23:54:17 +0000 (09:54 +1000)]
reboot: checkpatch.pl the new kernel/reboot.c file
Get the new file to pass scripts/checkpatch.pl
Signed-off-by: Robin Holt <holt@sgi.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Russ Anderson <rja@sgi.com> Cc: Robin Holt <holt@sgi.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Ingo Molnar <mingo@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Robin Holt [Thu, 27 Jun 2013 23:54:16 +0000 (09:54 +1000)]
reboot: move shutdown/reboot related functions to kernel/reboot.c
This patch is preparatory. It moves the reboot-related syscall and
associated functions from kernel/sys.c to kernel/reboot.c.
Signed-off-by: Robin Holt <holt@sgi.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Russ Anderson <rja@sgi.com> Cc: Robin Holt <holt@sgi.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Ingo Molnar <mingo@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Remove the prior patch's #define for easier backporting to the stable
releases.
Signed-off-by: Robin Holt <holt@sgi.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Russ Anderson <rja@sgi.com> Cc: Robin Holt <holt@sgi.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Ingo Molnar <mingo@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kevin Hao [Thu, 27 Jun 2013 23:54:16 +0000 (09:54 +1000)]
kernel/resource.c: remove the unneeded assignment in function __find_resource
This line was introduced by fcb11918 ("resources: add arch hook for
preventing allocation in reserved areas"). But the struct tmp was already
assigned to *new in the above line, so this seems superfluous. Just
remove it.
Signed-off-by: Kevin Hao <haokexin@gmail.com> Cc: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Ingo Molnar [Thu, 27 Jun 2013 23:54:15 +0000 (09:54 +1000)]
relay: fix timer madness
When I'm using the ktap script below to trace all event tracepoints, the
system hangs within a few seconds without this patch; the patch indeed
fixes the problem as the changelog describes.
This patch is old; the original patch discussion can be found from 2007:
http://marc.info/?l=linux-kernel&m=118544794717162&w=2 (In that mail
thread the patch didn't fix the problem reported there, but it does fix
the problem I encountered now.)
Ingo's original changelog:
Remove timer calls (!!!) from deep within the tracing infrastructure.
This was totally bogus code that can cause lockups and worse.
Poll the buffer every 2 jiffies for now.
Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Jens Axboe <axboe@kernel.dk> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Eric Dumazet <edumazet@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
drivers/w1/slaves/w1_ds2408.c: add magic sequence to disable P0 test mode
Power-up timing
The DS2408 is sensitive to the power-on slew rate and can inadvertently
power up with a test mode feature enabled. When this occurs, the P0 port
does not respond to the Channel Access Write command. For most reliable
operation, it is recommended to disable the test mode after every power-on
reset using the Disable Test Mode sequence shown below. The 64-bit ROM
code must be transmitted in the same bit sequence as with the Match ROM
command, i.e., least significant bit first. This precaution is
recommended in parasite power mode (VCC pin connected to GND) as well as
with VCC power.
Disable Test Mode:
RST,PD,96h,<64-bit DS2408 ROM Code>,3Ch,RST,PD
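A rough sketch of that sequence using the w1 bus primitives (locking and
error handling are omitted, the helper name is hypothetical, and only the
command bytes 96h and 3Ch come from the datasheet text above):

    static void w1_f29_disable_test_mode(struct w1_slave *sl)
    {
            u8 magic[10] = { 0x96, };       /* disable test mode command */

            /* 64-bit ROM code, transmitted least significant byte first */
            memcpy(&magic[1], (u8 *)&sl->reg_num, 8);
            magic[9] = 0x3C;

            w1_reset_bus(sl->master);
            w1_write_block(sl->master, magic, sizeof(magic));
            w1_reset_bus(sl->master);
    }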
Jan Luebbe [Thu, 27 Jun 2013 23:54:13 +0000 (09:54 +1000)]
pps-gpio: add device-tree binding and support
Instead of allocating a struct pps_gpio_platform_data in the DT case,
store the necessary information in struct pps_gpio_device_data itself.
This avoids an additional allocation and the ifdef. It also gets rid of
some indirection.
Also use dev_err instead of pr_err in the changed code.
Signed-off-by: Jan Luebbe <jlu@pengutronix.de> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Rodolfo Giometti <giometti@enneenne.com> Cc: Grant Likely <grant.likely@linaro.org> Cc: Rob Herring <rob.herring@calxeda.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Paul Clements [Thu, 27 Jun 2013 23:54:11 +0000 (09:54 +1000)]
nbd: correct disconnect behavior
Currently, when a disconnect is requested by the user (via NBD_DISCONNECT
ioctl) the return from NBD_DO_IT is undefined (it is usually one of
several error codes). This means that nbd-client does not know if a
manual disconnect was performed or whether a network error occurred.
Because of this, nbd-client's persist mode (which tries to reconnect after
error, but not after manual disconnect) does not always work correctly.
This change fixes this by causing NBD_DO_IT to always return 0 if a user
requests a disconnect. This means that nbd-client can correctly either
persist the connection (if an error occurred) or disconnect (if the user
requested it).
Signed-off-by: Paul Clements <paul.clements@steeleye.com> Acked-by: Rob Landley <rob@landley.net> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Michal Belczyk [Thu, 27 Jun 2013 23:54:11 +0000 (09:54 +1000)]
nbd: remove bogus BUG_ON in NBD_CLEAR_QUE
The NBD_CLEAR_QUE ioctl has been deprecated for quite some time (its job
is now done by two other ioctls). We should stop trying to make bogus
assertions in it. Also, user-level code should remove calls to
NBD_CLEAR_QUE, ASAP.
Signed-off-by: Michal Belczyk <belczyk@bsd.krakow.pl> Signed-off-by: Paul Clements <paul.clements@steeleye.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Wu Fengguang [Thu, 27 Jun 2013 23:54:10 +0000 (09:54 +1000)]
drivers/rapidio/rio-scan.c: make functions static
sparse warnings:
drivers/rapidio/rio-scan.c:1143:5: sparse: symbol 'rio_enum_mport' was not declared. Should it be static?
drivers/rapidio/rio-scan.c:1246:5: sparse: symbol 'rio_disc_mport' was not declared. Should it be static?
Remove the driver for Tsi500 Parallel RapidIO switch because this device
has not been available for several years. Since the first introduction of
Tsi500, the parallel RapidIO interface was replaced by the serial RapidIO
(sRIO) and therefore there is no value in keeping this driver.
Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com> Cc: Matt Porter <mporter@kernel.crashing.org> Cc: Li Yang <leoli@freescale.com> Cc: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
partitions/msdos: enumerate also AIX LVM partitions
Graft AIX partitions enumeration into partitions/msdos.c
There is already AIX disk detection logic in msdos.c. When an AIX disk
has been found, and if configured to do so, call the AIX partitions
recognizer. This avoids removing the AIX disk protection from msdos.c,
avoids code duplication, and ensures that AIX partition enumeration is
called before plain msdos partition enumeration.
Signed-off-by: Philippe De Muyter <phdm@macqel.be> Cc: Karel Zak <kzak@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
partitions-add-aix-lvm-partition-support-files: add the AIX_PARTITION entry
This is the final patch, enabling a user to select AIX LVM partition
detection.
Signed-off-by: Philippe De Muyter <phdm@macqel.be> Cc: Karel Zak <kzak@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
partitions-add-aix-lvm-partition-support-files: compile aix.c if configured
Signed-off-by: Philippe De Muyter <phdm@macqel.be> Cc: Karel Zak <kzak@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
ERROR: spaces required around that '+=' (ctx:WxV)
#137: FILE: block/partitions/aix.c:113:
+ totalreadcount +=copied;
^
ERROR: do not use assignment in if condition
#235: FILE: block/partitions/aix.c:211:
+ if (vgda_sector && (d = read_part_sector(state, vgda_sector, &sect))) {
ERROR: do not use assignment in if condition
#244: FILE: block/partitions/aix.c:220:
+ if (numlvs && (d = read_part_sector(state, vgda_sector + 1, &sect))) {
WARNING: line over 80 characters
#252: FILE: block/partitions/aix.c:228:
+ for (i = 0; foundlvs < numlvs && i < state->limit; i += 1) {
WARNING: line over 80 characters
#294: FILE: block/partitions/aix.c:270:
+ (i + 1 - lp_ix) * pp_blocks_size + psn_part1,
WARNING: line over 80 characters
#295: FILE: block/partitions/aix.c:271:
+ lvip[lv_ix].pps_per_lv * pp_blocks_size);
WARNING: line over 80 characters
#296: FILE: block/partitions/aix.c:272:
+ snprintf(tmp, sizeof(tmp), " <%s>\n", n[lv_ix].name);
WARNING: printk() should include KERN_ facility level
#306: FILE: block/partitions/aix.c:282:
+ printk("partition %s (%u pp's found) is not contiguous\n",
WARNING: kfree(NULL) is safe this check is probably not required
#311: FILE: block/partitions/aix.c:287:
+ if (n)
+ kfree(n);
total: 5 errors, 9 warnings, 291 lines checked
NOTE: whitespace errors detected, you may wish to use scripts/cleanpatch or
scripts/cleanfile
./patches/partitions-add-aix-lvm-partition-support-files.patch has style problems, please review.
If any of these errors are false positives, please report
them to the maintainer, see CHECKPATCH in MAINTAINERS.
Please run checkpatch prior to sending patches
Cc: Philippe De Muyter <phdm@macqel.be> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Fix a problem in the discovery of small (1 pp) partitions in the presence
of discontiguous partitions.
Signed-off-by: Philippe De Muyter <phdm@macqel.be> Cc: Karel Zak <kzak@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
AIX LVM permits making "logical volumes" out of multiple slices of
multiple disks. The new code allows access only to those "logical
volumes" made of one slice on the probed disk, a slice being a
contiguous disk area. The code also detects "logical volumes" made of
multiple slices on the probed disk, but can not describe them to the
partition layer, because the partition layer generic code does not support
that. When such non-contiguous "logical volumes" are detected, a
diagnostic message is printed.
Signed-off-by: Philippe De Muyter <phdm@macqel.be> Cc: Karel Zak <kzak@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
partitions/msdos.c: end-of-line whitespace and semicolon cleanup
Signed-off-by: Philippe De Muyter <phdm@macqel.be> Cc: Karel Zak <kzak@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Dan Carpenter [Thu, 27 Jun 2013 23:54:07 +0000 (09:54 +1000)]
mwave: fix info leak in mwave_ioctl()
Smatch complains that on 64 bit systems, there is a hole in the
MW_ABILITIES struct between ->component_count and ->component_list[]. It
leaks stack information from the mwave_ioctl() function.
I've added a memset() to initialize the struct to zero.
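The pattern looks roughly like this (illustrative, not the exact mwave
code):

    MW_ABILITIES abilities;

    memset(&abilities, 0, sizeof(abilities));   /* zeroes the padding hole too */
    /* ... fill in component_count, component_list[], etc. ... */
    if (copy_to_user((void __user *)arg, &abilities, sizeof(abilities)))
            return -EFAULT;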
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Cc: Greg KH <greg@kroah.com> Cc: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Manfred Spraul [Thu, 27 Jun 2013 23:54:07 +0000 (09:54 +1000)]
ipc/sem.c: replace shared sem_otime with per-semaphore value
sem_otime contains the time of the last semaphore operation that completed
successfully. Every operation updates this value, thus access from
multiple cpus can cause thrashing.
Therefore the patch replaces the variable with a per-semaphore variable.
The per-array sem_otime is only calculated when required.
No performance improvement on a single-socket i3 - only important
for larger systems.
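In sketch form (field and helper names follow this description and may not
match the source exactly), each semaphore carries its own otime and the
array-wide value is computed only when needed:

    struct sem {
            int             semval;         /* current value */
            int             sempid;         /* pid of last operation */
            spinlock_t      lock;           /* per-semaphore lock */
            time_t          sem_otime;      /* last successful operation */
    } ____cacheline_aligned_in_smp;

    static time_t get_semotime(struct sem_array *sma)
    {
            time_t res = sma->sem_base[0].sem_otime;
            int i;

            for (i = 1; i < sma->sem_nsems; i++)
                    res = max(res, sma->sem_base[i].sem_otime);
            return res;
    }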
Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Rik van Riel <riel@redhat.com> Cc: Davidlohr Bueso <davidlohr.bueso@hp.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Manfred Spraul [Thu, 27 Jun 2013 23:54:06 +0000 (09:54 +1000)]
ipc/sem.c: always use only one queue for alter operations
There are two places that can contain alter operations:
- the global queue: sma->pending_alter
- the per-semaphore queues: sma->sem_base[].pending_alter.
Since one of the queues must be processed first, this causes an odd
prioritization of the wakeups:
Right now, complex operations have priority over simple ops.
The patch restores the behavior of linux <=3.0.9: The longest
waiting operation has the highest priority.
This is done by using only one queue:
- if there are complex ops, then sma->pending_alter is used.
- otherwise, the per-semaphore queues are used.
As a side effect, do_smart_update_queue() becomes much simpler:
No more goto logic.
Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Rik van Riel <riel@redhat.com> Cc: Davidlohr Bueso <davidlohr.bueso@hp.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Manfred Spraul [Thu, 27 Jun 2013 23:54:06 +0000 (09:54 +1000)]
ipc/sem: separate wait-for-zero and alter tasks into separate queues
Introduce separate queues for operations that do not modify the semaphore
values. Advantages:
- Simpler logic in check_restart().
- Faster update_queue(): Right now, all wait-for-zero operations
are always tested, even if the semaphore value is not 0.
- wait-for-zero again gets priority, as in linux <=3.0.9
Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Rik van Riel <riel@redhat.com> Cc: Davidlohr Bueso <davidlohr.bueso@hp.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
i3, with 2 cores and with hyperthreading enabled. Interleave 2 in order
to use the full cores first. HT partially hides the delay from cacheline
thrashing, thus the improvement is "only" 8.7% if 4 threads are running.
Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Rik van Riel <riel@redhat.com> Cc: Davidlohr Bueso <davidlohr.bueso@hp.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Enforce that ipc_rcu_alloc returns a cacheline aligned pointer on SMP.
Rationale:
The SysV sem code tries to move the main spinlock into a separate cacheline
(____cacheline_aligned_in_smp). This works only if ipc_rcu_alloc returns
cacheline aligned pointers.
vmalloc and kmalloc return cacheline aligned pointers; the implementation
of ipc_rcu_alloc breaks that.
Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Rik van Riel <riel@redhat.com> Cc: Davidlohr Bueso <davidlohr.bueso@hp.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Davidlohr Bueso [Thu, 27 Jun 2013 23:54:03 +0000 (09:54 +1000)]
ipc,msg: make msgctl_nolock lockless
While the INFO cmd doesn't take the ipc lock, the STAT commands do acquire
it unnecessarily. We can do the permissions and security checks only
holding the rcu lock.
This function now mimics semctl_nolock().
Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Rik van Riel <riel@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Davidlohr Bueso [Thu, 27 Jun 2013 23:54:03 +0000 (09:54 +1000)]
ipc,msg: introduce lockless functions to obtain the ipc object
Add msq_obtain_object() and msq_obtain_object_check(), which will allow us
to get the ipc object without acquiring the lock. Just as with
semaphores, these functions are basically wrappers around
ipc_obtain_object*().
Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Rik van Riel <riel@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Davidlohr Bueso [Thu, 27 Jun 2013 23:54:02 +0000 (09:54 +1000)]
ipc,msg: introduce msgctl_nolock
Similar to semctl, when calling msgctl, the *_INFO and *_STAT commands can
be performed without acquiring the ipc object.
Add a msgctl_nolock() function and move the logic of *_INFO and *_STAT out
of msgctl(). This change still takes the lock; it will be made properly
lockless in the next patch.
Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Rik van Riel <riel@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Davidlohr Bueso [Thu, 27 Jun 2013 23:54:02 +0000 (09:54 +1000)]
ipc: move locking out of ipcctl_pre_down_nolock
This function currently acquires both the rw_mutex and the rcu lock on
successful lookups, leaving the callers to explicitly unlock them,
creating another two level locking situation.
Make the callers (including those that still use ipcctl_pre_down())
explicitly lock and unlock the rwsem and rcu lock.
Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Rik van Riel <riel@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
The issue was caused by allocating memory in GFP_KERNEL context after
calling rcu_read_lock(). This patch restores the rcu_read_lock() call
in ipc_addid() and thus maintains the original
behavior.
Davidlohr Bueso [Thu, 27 Jun 2013 23:54:01 +0000 (09:54 +1000)]
ipc: move rcu lock out of ipc_addid
This patchset continues the work that began in the sysv ipc semaphore
scaling series: https://lkml.org/lkml/2013/3/20/546
Just like semaphores used to be, sysv shared memory and msg queues also
abuse the ipc lock, unnecessarily holding it for operations such as
permission and security checks. This patchset mostly deals with mqueues,
and while shared mem can be done in a very similar way, I want to get
these patches out in the open first. It also does some pending cleanups,
mostly focused on the two level locking we have in ipc code, taking care
of ipc_addid() and ipcctl_pre_down_nolock() - yes there are still
functions that need to be updated as well.
This patch:
Make all callers explicitly take and release the RCU read lock.
This addresses the two level locking seen in newary(), newseg() and
newqueue(). For the last two, explicitly unlock the ipc object and the
rcu lock, instead of calling the custom shm_unlock and msg_unlock
functions. The next patch will deal with the open coded locking for
->perm.lock
Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Rik van Riel <riel@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Jean Delvare [Thu, 27 Jun 2013 23:54:00 +0000 (09:54 +1000)]
idr: print a stack dump after ida_remove warning
We print a stack dump after an idr_remove warning. This is useful for
finding the faulty piece of code. Let's do the same for ida_remove, as it
would be equally useful there.
Signed-off-by: Jean Delvare <jdelvare@suse.de> Cc: Tejun Heo <tj@kernel.org> Cc: Takashi Iwai <tiwai@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Zhang Yanfei [Thu, 27 Jun 2013 23:53:59 +0000 (09:53 +1000)]
s390: remove setting for saved_max_pfn
The only user of saved_max_pfn on s390 was the read_oldmem interface, but we
have removed that interface, so saved_max_pfn is now unneeded on s390 and
we needn't set it anymore.
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Dave Hansen <dave@sr71.net> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Matt Fleming <matt.fleming@intel.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>