Corey Minyard [Thu, 7 Feb 2013 01:31:53 +0000 (12:31 +1100)]
ipmi: add new kernel options to prevent automatic ipmi init
The configuration change building ipmi_si into the kernel precludes the
use of a custom driver that can utilize more than one KCS interface,
multiple IPMBs, and more than one BMC. This capability is important for
fault-tolerant systems.
Even if the kernel option ipmi_si.trydefaults=0 is specified, ipmi_si
discovers and claims one of the KCS interfaces on a Stratus server. The
inability to prevent the kernel from managing this device is therefore a
regression from previous kernels; it breaks a capability that fault-tolerant
vendors have relied upon.
To support both ACPI opregion access and the need to avoid activation of
ipmi_si on some platforms, we've added two new kernel options,
ipmi_si.tryacpi and ipmi_si.trydmi, which prevent ipmi_si from
initializing when they are set to 0 on the kernel command line.
With these options at the default value of 1, ipmi_si init proceeds
according to the kernel default.
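For example, on a platform where a vendor driver should own the KCS
interfaces, booting with something like the following (illustrative only)
keeps ipmi_si from initializing via either discovery path:
	ipmi_si.tryacpi=0 ipmi_si.trydmi=0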
Tested-by: Jim Paradis <jparadis@redhat.com> Signed-off-by: Robert Evans <Robert.Evans@stratus.com> Signed-off-by: Jim Paradis <jparadis@redhat.com> Signed-off-by: Tony Camuso <tcamuso@redhat.com> Signed-off-by: Corey Minyard <cminyard@mvista.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Given the obvious distinction between kernel and userspace supported
by uapi/, it seems unnecessary to comment on that.
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca> Signed-off-by: Corey Minyard <cminyard@mvista.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Manfred Spraul [Thu, 7 Feb 2013 01:31:52 +0000 (12:31 +1100)]
ipc/sem.c: alternatives to preempt_disable()
ipc/sem.c uses a custom wakeup scheme that relies on preempt_disable().
On -RT, this causes increased latencies and debug warnings.
The patch adds two additional schemes:
- one built around a completion - could be better for -RT kernels
- one built around a spinlock - unfortunately it's broken
- and the current one
My preferred solution would be the spinlock implementation: RT would use
preemptible spinlocks, mainline normal spinlocks. Thus both get the
optimal implementation without any special code in ipc/sem.c.
Unfortunately, I don't see how it could be fixed.
Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Mike Galbraith <efault@gmx.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:51 +0000 (12:31 +1100)]
sctp: convert to idr_alloc()
Convert to the much saner new idr interface.
Only compile tested.
v2: Don't preload if @gfp doesn't contain __GFP_WAIT, as the function
may be called from non-process context. Also, add a comment
explaining that @idr_low never becomes zero.
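A minimal sketch of the conditional-preload pattern described above (the
lock and idr names here are assumptions, not necessarily what the patch
uses):
	bool preload = !!(gfp & __GFP_WAIT);
	int id;

	if (preload)
		idr_preload(gfp);
	spin_lock_bh(&sctp_assocs_id_lock);	/* assumed lock protecting the idr */
	/* 0 is never a valid assoc id, so @idr_low stays >= 1 */
	id = idr_alloc(&sctp_assocs_id, asoc, idr_low, 0, GFP_NOWAIT);
	spin_unlock_bh(&sctp_assocs_id_lock);
	if (preload)
		idr_preload_end();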
Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Vlad Yasevich <vyasevich@gmail.com> Cc: Sridhar Samudrala <sri@us.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:50 +0000 (12:31 +1100)]
ipc: convert to idr_alloc()
Convert to the much saner new idr interface.
The new interface doesn't directly translate to the way idr_pre_get()
was used around ipc_addid() as preloading disables preemption. From
my cursory reading, it seems like we should be able to do all
allocation from ipc_addid(), so I moved it there. Can you please
check whether this would be okay? If this is wrong and ipc_addid()
should be allowed to be called from non-sleepable context, I'd suggest
allocating the id itself in the outer functions and later installing the
pointer using idr_replace().
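A rough sketch of that alternative (reserve the ID early, install the real
pointer later); the structure and field names here are illustrative:
	/* in the outer, sleepable context */
	id = idr_alloc(&ids->ipcs_idr, NULL, 0, 0, GFP_KERNEL);
	if (id < 0)
		return id;
	/* ... later, once the object is fully set up ... */
	idr_replace(&ids->ipcs_idr, new, id);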
Only compile tested.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Stanislav Kinsbursky <skinsbursky@parallels.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: James Morris <jmorris@namei.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:49 +0000 (12:31 +1100)]
inotify: convert to idr_alloc()
Convert to the much saner new idr interface.
Note that the adhoc cyclic id allocation is buggy. If wraparound
happens, the previous code with idr_get_new_above() may segfault and
the converted code will trigger WARN and return -EINVAL. Even if it's
fixed to wrap to zero, the code will be prone to unnecessary -ENOSPC
failures after the first wraparound. We probably need to implement
proper cyclic support in idr.
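A hedged sketch of what such cyclic support could look like (this is not
the inotify code, just the retry-from-zero idea; overflow of last_id would
still need care):
	/* try just above the last id; on exhaustion wrap around to the bottom */
	id = idr_alloc(idr, ptr, last_id + 1, 0, GFP_NOWAIT);
	if (id == -ENOSPC)
		id = idr_alloc(idr, ptr, 0, 0, GFP_NOWAIT);
	if (id >= 0)
		last_id = id;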
Only compile tested.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: John McCutchan <john@johnmccutchan.com> Cc: Robert Love <rlove@rlove.org> Cc: Eric Paris <eparis@parisplace.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:49 +0000 (12:31 +1100)]
dlm: convert to idr_alloc()
Convert to the much saner new idr interface. Error return values from
recover_idr_add() mix -1 and -errno. The conversion doesn't change
that but it looks iffy.
Only compile tested.
Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:47 +0000 (12:31 +1100)]
target/iscsi: convert to idr_alloc()
Convert to the much saner new idr interface.
Only compile tested.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Nicholas A. Bellinger <nab@linux-iscsi.org> Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:44 +0000 (12:31 +1100)]
macvtap: convert to idr_alloc()
Convert to the much saner new idr interface.
Only compile tested.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Jason Wang <jasowang@redhat.com> Cc: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:41 +0000 (12:31 +1100)]
IB/mlx4: convert to idr_alloc()
Convert to the much saner new idr interface.
Only compile tested.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Jack Morgenstein <jackm@dev.mellanox.co.il> Cc: Or Gerlitz <ogerlitz@mellanox.com> Cc: Roland Dreier <roland@purestorage.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:39 +0000 (12:31 +1100)]
IB/core: convert to idr_alloc()
Convert to the much saner new idr interface.
Only compile tested.
v2: Mike triggered WARN_ON() in idr_preload() because send_mad(),
which may be used from non-process context, was calling
idr_preload() unconditionally. Preload iff @gfp_mask has
__GFP_WAIT.
Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Sean Hefty <sean.hefty@intel.com> Reported-by: "Marciniszyn, Mike" <mike.marciniszyn@intel.com> Cc: Roland Dreier <roland@kernel.org> Cc: Sean Hefty <sean.hefty@intel.com> Cc: Hal Rosenstock <hal.rosenstock@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:36 +0000 (12:31 +1100)]
firewire: convert to idr_alloc()
Convert to the much saner new idr interface.
Only compile tested.
v2: Stefan pointed out that add_client_resource() may be called from
non-process context. Preload iff @gfp_mask contains __GFP_WAIT.
Also updated to include minor upper limit check.
Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Stefan Richter <stefanr@s5r6.in-berlin.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:35 +0000 (12:31 +1100)]
dca: convert to idr_alloc()
Convert to the much saner new idr interface.
Only compile tested.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Maciej Sosnowski <maciej.sosnowski@intel.com> Cc: Shannon Nelson <shannon.nelson@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:35 +0000 (12:31 +1100)]
atm/nicstar: convert to idr_alloc()
Convert to the much saner new idr interface. The existing code looks
buggy to me - ID 0 is treated as no-ID but allocation specifies 0 as
lower limit and there's no error handling after partial success. This
conversion keeps the bugs unchanged.
Only compile tested.
v2: id1 and id2 are now directly used for -errno return and thus
should be signed. Make them int instead of u32. This was spotted
by kbuild test robot.
v3: Further simplify as suggested by Chas Williams.
Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Chas Williams <chas@cmf.nrl.navy.mil> Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:34 +0000 (12:31 +1100)]
block: fix synchronization and limit check in blk_alloc_devt()
idr allocation in blk_alloc_devt() wasn't synchronized against lookup
and removal, and its limit check was off by one - 1 << MINORBITS is
the number of minors allowed, not the maximum allowed minor.
Add locking and rename MAX_EXT_DEVT to NR_EXT_DEVT and fix limit
checking.
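A sketch of the corrected check, with the mutex shown only to illustrate
the added synchronization (details here are assumptions, not a verbatim
copy of the patch):
	/* NR_EXT_DEVT is a count, so valid indices are 0 .. NR_EXT_DEVT - 1 */
	#define NR_EXT_DEVT	(1 << MINORBITS)

	mutex_lock(&ext_devt_mutex);
	rc = idr_get_new(&ext_devt_idr, part, &idx);
	if (!rc && idx >= NR_EXT_DEVT) {	/* was: idx > MAX_EXT_DEVT, off by one */
		idr_remove(&ext_devt_idr, idx);
		rc = -EBUSY;
	}
	mutex_unlock(&ext_devt_mutex);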
Tejun Heo [Thu, 7 Feb 2013 01:31:33 +0000 (12:31 +1100)]
idr: implement idr_preload[_end]() and idr_alloc()
The current idr interface is very cumbersome.
* For all allocations, two function calls - idr_pre_get() and
idr_get_new*() - should be made.
* idr_pre_get() doesn't guarantee that the following idr_get_new*()
will not fail from memory shortage. If idr_get_new*() returns
-EAGAIN, the caller is expected to retry pre_get and allocation.
* idr_get_new*() can't enforce upper limit. Upper limit can only be
enforced by allocating and then freeing if above limit.
* idr_layer buffer is unnecessarily per-idr. Each idr ends up keeping
around MAX_IDR_FREE idr_layers. The memory consumed per idr is
under two pages but it makes it difficult to make idr_layer larger.
This patch implements the following new set of allocation functions.
* idr_preload[_end]() - Similar to radix tree preloading but doesn't fail.
The first idr_alloc() inside the preload section can be treated as if it
were called with the @gfp_mask used for idr_preload().
* idr_alloc() - Allocate an ID w/ lower and upper limits. Takes
@gfp_flags and can be used w/o preloading. When used inside
preloaded section, the allocation mask of preloading can be assumed.
If idr_alloc() can be called from a context which allows sufficiently
relaxed @gfp_mask, it can be used by itself. If, for example,
idr_alloc() is called inside spinlock protected region, preloading can
be used like the following.
	idr_preload(GFP_KERNEL);
	spin_lock(lock);
	id = idr_alloc(idr, ptr, start, end, GFP_NOWAIT);
	spin_unlock(lock);
	idr_preload_end();
	if (id < 0)
		error;
which is much simpler and less error-prone than idr_pre_get and
idr_get_new*() loop.
The new interface uses a per-cpu idr_layer buffer and thus the number of
idr's in the system doesn't affect the amount of memory used for
preloading.
idr_layer_alloc() is introduced to handle idr_layer allocations for
both old and new ID allocation paths. This is a bit hairy now but the
new interface is expected to replace the old and the internal
implementation eventually will become simpler.
v2: Improve idr_preload() comment a bit so that it's clear that
preemption is disabled while preloaded. Make idr_layer_alloc()
try kmem_cache first and then fall back to per-cpu buffer;
otherwise, non-preloading idr_alloc()s empty the per-cpu buffers,
making preloaders fill them, which, while not incorrect, is still
a bit unfair.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:33 +0000 (12:31 +1100)]
idr: refactor idr_get_new_above()
Move slot filling to idr_fill_slot() from idr_get_new_above_int() and
make idr_get_new_above() directly call it. idr_get_new_above_int() is
no longer needed and removed.
This will be used to implement a new ID allocation interface.
Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:33 +0000 (12:31 +1100)]
idr: remove _idr_rc_to_errno() hack
idr uses -1, IDR_NEED_TO_GROW and IDR_NOMORE_SPACE to communicate
exception conditions internally. The return value is later translated
to errno values using _idr_rc_to_errno().
This is confusing. Drop the custom ones and consistently use -EAGAIN
for "tree needs to grow", -ENOMEM for "need more memory" and -ENOSPC
for "ran out of ID space".
Due to the weird memory preloading mechanism, [ra]_get_new*() return
-EAGAIN on memory shortage, so we need to substitute -ENOMEM w/
-EAGAIN on those interface functions. They'll eventually be cleaned
up and the translations will go away.
This patch doesn't introduce any functional changes.
Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:31:32 +0000 (12:31 +1100)]
idr: deprecate idr_remove_all()
There was only one legitimate use of idr_remove_all() and a lot more
of incorrect uses (or lack of it). Now that idr_destroy() implies
idr_remove_all() and all the in-kernel users updated not to use it,
there's no reason to keep it around. Mark it deprecated so that we
can later unexport it.
idr_remove_all() is made an inline function calling __idr_remove_all()
to avoid triggering deprecated warning on EXPORT_SYMBOL().
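Roughly, the shape of that arrangement (a sketch of the idea, not a
verbatim copy of the patch):
	void __idr_remove_all(struct idr *idp);	/* still usable from idr_destroy() */

	/* deprecated wrapper; new code should rely on idr_destroy() instead */
	static inline void __deprecated idr_remove_all(struct idr *idp)
	{
		__idr_remove_all(idp);
	}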
Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:28:02 +0000 (12:28 +1100)]
inotify: don't use idr_remove_all()
idr_destroy() can destroy idr by itself and idr_remove_all() is being
deprecated. Drop its usage.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: John McCutchan <john@johnmccutchan.com> Cc: Robert Love <rlove@rlove.org> Cc: Eric Paris <eparis@parisplace.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:28:02 +0000 (12:28 +1100)]
nfs: idr_destroy() no longer needs idr_remove_all()
idr_destroy() can destroy idr by itself and idr_remove_all() is being
deprecated. Drop reference to idr_remove_all(). Note that the code
wasn't completely correct before, because idr_remove() on all entries
doesn't necessarily release all idr_layers, which could lead to a memory
leak.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: "J. Bruce Fields" <bfields@fieldses.org> Cc: Trond Myklebust <trond.myklebust@fys.uio.no> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:28:02 +0000 (12:28 +1100)]
dlm: don't use idr_remove_all()
idr_destroy() can destroy idr by itself and idr_remove_all() is being
deprecated.
The conversion isn't completely trivial for recover_idr_clear() as
it's the only place in kernel which makes legitimate use of
idr_remove_all() w/o idr_destroy(). Replace it with idr_remove() call
inside idr_for_each_entry() loop. It goes on top so that it matches
the operation order in recover_idr_del().
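As a sketch, the replacement described above looks roughly like this
(the type and field names are assumed from context, not quoted from the
patch):
	struct dlm_rsb *r;
	int id;

	idr_for_each_entry(&ls->ls_recover_idr, r, id) {
		idr_remove(&ls->ls_recover_idr, id);	/* removal first, matching recover_idr_del() */
		/* ... existing per-entry cleanup follows ... */
	}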
Only compile tested.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Christine Caulfield <ccaulfie@redhat.com> Cc: David Teigland <teigland@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:28:01 +0000 (12:28 +1100)]
dlm: use idr_for_each_entry() in recover_idr_clear() error path
Convert recover_idr_clear() to use idr_for_each_entry() instead of
idr_for_each(). It's somewhat less efficient this way but it
shouldn't matter in an error path. This is to help with deprecation
of idr_remove_all().
Only compile tested.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Christine Caulfield <ccaulfie@redhat.com> Cc: David Teigland <teigland@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:28:00 +0000 (12:28 +1100)]
drm: don't use idr_remove_all()
idr_destroy() can destroy idr by itself and idr_remove_all() is being
deprecated. Drop its usage.
* drm_ctxbitmap_cleanup() was calling idr_remove_all() but forgetting
idr_destroy() thus leaking all buffered free idr_layers. Replace it
with idr_destroy().
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: David Airlie <airlied@linux.ie> Cc: Inki Dae <inki.dae@samsung.com> Cc: Joonyoung Shim <jy0922.shim@samsung.com> Cc: Seung-Woo Kim <sw0312.kim@samsung.com> Cc: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:27:59 +0000 (12:27 +1100)]
atm/nicstar: don't use idr_remove_all()
idr_destroy() can destroy idr by itself and idr_remove_all() is being
deprecated. Drop its usage.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Chas Williams <chas@cmf.nrl.navy.mil> Cc: David Miller <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:27:59 +0000 (12:27 +1100)]
idr: make idr_destroy() imply idr_remove_all()
idr is silly in quite a few ways, one of which is how it's supposed to
be destroyed - idr_destroy() doesn't release IDs and doesn't even
whine if the idr isn't empty. If the caller forgets idr_remove_all(),
it simply leaks memory.
Even ida gets this wrong and leaks memory on destruction. There is
absolutely no reason not to call idr_remove_all() from idr_destroy().
Nobody is abusing idr_destroy() for shrinking free layer buffer and
continues to use idr after idr_destroy(), so it's safe to do
remove_all from destroy.
In the whole kernel, there is only one place where idr_remove_all() is
legitimately used without a following idr_destroy(), while there are
quite a few places where the caller forgets either idr_remove_all() or
idr_destroy() and leaks memory.
This patch makes idr_destroy() call idr_remove_all() and updates the
function description accordingly.
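Conceptually the destroy path then becomes (a sketch; the internal helper
names are those of lib/idr.c of that era and are treated as assumptions
here):
	void idr_destroy(struct idr *idp)
	{
		/* release every remaining ID first */
		idr_remove_all(idp);

		/* then free the cached free-layer buffer, as before */
		while (idp->id_free_cnt) {
			struct idr_layer *p = get_from_free_list(idp);
			kmem_cache_free(idr_layer_cache, p);
		}
	}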
Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tejun Heo [Thu, 7 Feb 2013 01:27:58 +0000 (12:27 +1100)]
idr: fix a subtle bug in idr_get_next()
The iteration logic of idr_get_next() is borrowed mostly verbatim from
idr_for_each(). It walks down the tree looking for the slot matching
the current ID. If the matching slot is not found, the ID is
incremented by the distance of single slot at the given level and
repeats.
The implementation assumes that during the whole iteration id is
aligned to the layer boundaries of the level closest to the leaf,
which is true for all iterations starting from zero or an existing
element and thus is fine for idr_for_each().
However, idr_get_next() may be given any point and if the starting id
hits in the middle of a non-existent layer, increment to the next
layer will end up skipping the same offset into it. For example, an
IDR with IDs filled between [64, 127] would look like the following.
  [  0  64 ... ]
   /----/   |
   |        |
  NULL   [ 64 ... 127 ]
If idr_get_next() is called with 63 as the starting point, it will try
to follow down the pointer from 0. As it is NULL, it will then try to
proceed to the next slot in the same level by adding the slot distance
at that level which is 64 - making the next try 127. It goes around
the loop and finds and returns 127 skipping [64, 126].
Note that this bug also triggers in an idr_for_each_entry() loop which
deletes during iteration, as deletions can make layers go away, leaving
the iteration with an unaligned ID into missing layers.
Fix it by ensuring that proceeding to the next slot doesn't carry over the
unaligned offset - i.e. use round_up(id + 1, slot_distance) instead of
id += slot_distance.
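In code, the fix described above amounts to something like the following
sketch, with 1 << n standing in for the slot distance at the current level:
	/* was: id += 1 << n;  -- carries over the unaligned offset */
	id = round_up(id + 1, 1 << n);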
Signed-off-by: Tejun Heo <tj@kernel.org> Reported-by: David Teigland <teigland@redhat.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
This most likely happens because dev_t is freed while the number is still
used and idr_get_new() is not protected on every use. The fix adds a
mutex where it wasn't before and moves the dev_t free function so it is
called after device del.
Signed-off-by: Tomas Henzl <thenzl@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Atsushi Kumagai [Thu, 7 Feb 2013 01:27:57 +0000 (12:27 +1100)]
kexec: add the values related to buddy system for filtering free pages.
This adds the values related to the buddy system to the vmcoreinfo data so
that makedumpfile (the dump filtering command) can filter out all free
pages with the new logic.
It's faster than the current logic because it can distinguish free pages by
analyzing the page structure at the same time as it filters out other
unnecessary pages (e.g. anonymous pages).
OTOH, the current logic has to trace free_list to distinguish free pages
while analyzing page structure to filter out other unnecessary pages.
The new logic uses the fact that a buddy page is marked by _mapcount ==
PAGE_BUDDY_MAPCOUNT_VALUE. But _mapcount shares its memory with other
fields for SLAB/SLUB when PG_slab is set, so we need to check whether PG_slab
is set before looking up the _mapcount value. The order of the buddy
allocation can be read from the private field. To sum up, the values below
are required for this logic.
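With those values exported, the check the dump filter can perform looks
roughly like this (a hedged sketch of the in-kernel PageBuddy idea, not
makedumpfile's actual code, which works on the dump image):
	/* a page is a free buddy page iff PG_slab is clear and _mapcount
	 * carries the buddy marker; the order then comes from page->private */
	static bool page_is_free_sketch(struct page *page)
	{
		if (PageSlab(page))	/* _mapcount is reused by SLAB/SLUB here */
			return false;
		return atomic_read(&page->_mapcount) == PAGE_BUDDY_MAPCOUNT_VALUE;
	}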
Changelog from v1 to v2:
1. remove SIZE(pageflags)
The new logic was changed after I sent the v1 patch.
Accordingly, SIZE(pageflags) is no longer necessary for makedumpfile.
What's makedumpfile:
makedumpfile creates a small dumpfile by excluding unnecessary pages
for the analysis. To distinguish unnecessary pages, makedumpfile gets
the vmcoreinfo data which has the minimum debugging information only
for dump filtering.
Signed-off-by: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Acked-by: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Alan Cox [Thu, 7 Feb 2013 01:27:57 +0000 (12:27 +1100)]
fork: unshare: remove dead code
If new_nsproxy is set we will always call switch_task_namespaces() and then
set new_nsproxy back to NULL, so the reassignment and fall-through check
are redundant.
Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kees Cook [Thu, 7 Feb 2013 01:27:55 +0000 (12:27 +1100)]
coredump: remove redundant defines for dumpable states
The existing SUID_DUMP_* defines duplicate the newer SUID_DUMPABLE_*
defines introduced in 54b501992dd ("coredump: warn about unsafe
suid_dumpable / core_pattern combo"). Remove the new ones, and use the
prior values instead.
Signed-off-by: Kees Cook <keescook@chromium.org> Reported-by: Chen Gang <gang.chen@asianux.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Alan Cox <alan@linux.intel.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Doug Ledford <dledford@redhat.com> Cc: Serge Hallyn <serge.hallyn@canonical.com> Cc: James Morris <james.l.morris@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Andrey Vagin [Thu, 7 Feb 2013 01:27:54 +0000 (12:27 +1100)]
signalfd: add ability to return siginfo in a raw format
signalfd should be called with the flag SFD_RAW for that.
signalfd_siginfo is not complete for a siginfo with a negative si_code.
copy_siginfo_to_user() copies the full siginfo to user space if si_code is
negative; signalfd_copyinfo() doesn't do that and can't be extended to do so,
because its format is not compatible with siginfo_t.
Another problem is that the __SI_* constant is stripped from si_code. That is
not a problem for ordinary applications, because they expect a defined type of
siginfo (internal logic). But when we want to dump pending signals, we can't
predict the type of siginfo, so we have to get it from the kernel.
The main idea of the raw format is that it should be enough for restoring
exactly the same siginfo for the current process.
This functionality is required for checkpointing pending signals.
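Hypothetical user-space usage of the proposed flag (the read size and exact
semantics of SFD_RAW are assumptions based on the description above):
	sigset_t mask;
	siginfo_t info;
	int fd;

	sigfillset(&mask);
	sigprocmask(SIG_BLOCK, &mask, NULL);
	fd = signalfd(-1, &mask, SFD_RAW | SFD_CLOEXEC);
	read(fd, &info, sizeof(info));	/* raw siginfo, not struct signalfd_siginfo */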
Signed-off-by: Andrey Vagin <avagin@openvz.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: David Howells <dhowells@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Cc: Pavel Emelyanov <xemul@parallels.com> Cc: Cyrill Gorcunov <gorcunov@openvz.org> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Andrey Vagin [Thu, 7 Feb 2013 01:27:54 +0000 (12:27 +1100)]
signal: allow to send any siginfo to itself
The idea is simple. We need to get the siginfo for each signal on
checkpointing dump, and then return it back on restore.
The first problem is that the kernel doesn't report complete siginfos to
userspace. In a signal handler the kernel strips SI_CODE from siginfo.
When a siginfo is received from signalfd, it has a different format with
fixed sizes of fields. The interface of signalfd was extended. If a
signalfd is created with the flag SFD_RAW, it returns siginfo in a raw
format.
rt_sigqueueinfo looks suitable for restoring signals, but it can't
send siginfo with a positive si_code, because these codes are reserved for
the kernel. In the real world each person has the right to do anything with
himself, so I think a process should be able to send any siginfo to itself.
This patch:
The kernel prevents sending a siginfo with a positive si_code, because
these codes are reserved for the kernel. I think we can allow a task to send
such a siginfo to itself. This operation should not be dangerous.
This functionality is required for restoring signals in
checkpoint/restart.
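A hypothetical restore-time sketch built on the relaxed check (names and
flow are illustrative, not taken from the patch):
	/* `saved' is a siginfo_t dumped earlier; its si_code may be positive */
	if (syscall(SYS_rt_sigqueueinfo, getpid(), saved.si_signo, &saved) < 0)
		perror("rt_sigqueueinfo");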
Signed-off-by: Andrey Vagin <avagin@openvz.org> Cc: Serge Hallyn <serge.hallyn@canonical.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Cc: Pavel Emelyanov <xemul@parallels.com> Cc: Cyrill Gorcunov <gorcunov@openvz.org> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Namjae Jeon [Thu, 7 Feb 2013 01:27:53 +0000 (12:27 +1100)]
fat: eliminate iterations in fat_search_long and __fat_readdir in case of EOD
When doing lookups via fat_search_long(), we can stop checking for further
entries if we detect End of Directory, i.e. if (de->name[0] == 0x00). The
current code traverses the cluster chain of a directory until a hit is
found or till the last cluster for that directory, ignoring the EOD mark.
Fix this.
Likewise, when readdir(3) is called, we can stop checking for further
entries in __fat_readdir() when we hit EOD.
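The early exit is essentially the following sketch (the surrounding loop
and the label are assumptions, not quoted from the patch):
	if (de->name[0] == 0x00)	/* End of Directory: nothing valid follows */
		goto end_of_dir;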
Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Ravishankar N <ravi.n1@samsung.com> Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>