Arthur Othieno [Sat, 10 Sep 2005 07:26:22 +0000 (00:26 -0700)]
[PATCH] Remove even more stale references to Documentation/smp.tex
Randy cleaned out the bulk of these stale references to the now long gone
Documentation/smp.tex back in 2004. I followed this up with a few more
sweeps. Somehow, these have managed to sneak back in since.
I can't seem to figure out a contact point for M32R (no one listed in
MAINTAINERS!), but these patches are trivial anyway.
Signed-off-by: Arthur Othieno <a.othieno@bluewin.ch> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] sched: don't kick ALB in the presence of pinned task
Jack Steiner brought this issue up at my OLS talk.
Take a scenario where two tasks are pinned to two HT threads in a physical
package. Idle packages in the system will keep kicking migration_thread on
the busy package without any success.
We will run into similar scenarios in the presence of CMP/NUMA.
Nick Piggin [Sat, 10 Sep 2005 07:26:19 +0000 (00:26 -0700)]
[PATCH] sched: HT optimisation
If an idle sibling of an HT queue encounters a busy sibling, then perform
higher-level load balancing of the non-idle variety.
Performance of multiprocessor HT systems with low numbers of tasks
(generally < number of virtual CPUs) can be significantly worse than the
exact same workloads when running in non-HT mode. The reason is largely
due to poor scheduling behaviour.
This patch improves the situation, making the performance gap far less
significant on one problematic test case (tbench).
Signed-off-by: Nick Piggin <npiggin@suse.de> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Nick Piggin [Sat, 10 Sep 2005 07:26:18 +0000 (00:26 -0700)]
[PATCH] sched: less locking
During periodic load balancing, don't hold this runqueue's lock while
scanning remote runqueues, which can take a non-trivial amount of time
especially on very large systems.
Holding the runqueue lock will only help to stabilise ->nr_running; however,
this doesn't do much to help because tasks being woken will simply get held
up on the runqueue lock, so ->nr_running would not provide a really
accurate picture of runqueue load in that case anyway.
What's more, ->nr_running (and possibly the cpu_load averages) of remote
runqueues won't be stable anyway, so load balancing is always an inexact
operation.
Signed-off-by: Nick Piggin <npiggin@suse.de> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Nick Piggin [Sat, 10 Sep 2005 07:26:16 +0000 (00:26 -0700)]
[PATCH] sched: less newidle locking
Similarly to the earlier change in load_balance, only lock the runqueue in
load_balance_newidle if the busiest queue found has nr_running > 1. This
will reduce the frequency of expensive remote runqueue lock acquisitions in the
schedule() path on some workloads.
Signed-off-by: Nick Piggin <npiggin@suse.de> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
William Weston reported unusually high scheduling latencies on his x86 HT
box, on the -RT kernel. I managed to reproduce it on my HT box and the
latency tracer shows the incident in action:
The reason for this anomaly is the following code in dependent_sleeper():
	/*
	 * If a user task with lower static priority than the
	 * running task on the SMT sibling is trying to schedule,
	 * delay it till there is proportionately less timeslice
	 * left of the sibling task to prevent a lower priority
	 * task from using an unfair proportion of the
	 * physical cpu's resources. -ck
	 */
	[...]
	if (((smt_curr->time_slice * (100 - sd->per_cpu_gain) /
		100) > task_timeslice(p)))
			ret = 1;
Note that in contrast to the comment above, we don't actually do the check
based on static priority, we do the check based on timeslices. But
timeslices go up and down, and even highprio tasks can randomly have very
low timeslices (just before their next refill) and can thus be judged as
'lowprio' by the above piece of code. This condition is clearly buggy.
The correct test is to check for static_prio _and_ to check for the
preemption priority. Even on different static priority levels, a
higher-prio interactive task should not be delayed due to a
higher-static-prio CPU hog.
There is a symmetric bug in the 'kick SMT sibling' code of this function as
well, which can be solved in a similar way.
The patch below (against the current scheduler queue in -mm) fixes both
bugs. I have build and boot-tested this on x86 SMT, and nice +20 tasks
still get properly throttled - so the dependent-sleeper logic is still in
action.
btw., these bugs pessimised the SMT scheduler because the 'delay wakeup'
property was applied too liberally, so this fix is likely a throughput
improvement as well.
I separated out a smt_slice() function to make the code easier to read.
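As a rough sketch of what such a helper could look like (illustrative only,
built from the timeslice/per_cpu_gain expression quoted above, not a verbatim
copy of the -mm patch):

	/*
	 * Illustrative helper: the share of p's timeslice that the
	 * SMT-nice logic is willing to trade away on the sibling.
	 */
	static inline unsigned int smt_slice(task_t *p, struct sched_domain *sd)
	{
		return p->time_slice * (100 - sd->per_cpu_gain) / 100;
	}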
This patch implements a task state bit (TASK_NONINTERACTIVE), which can be
used by blocking points to mark the task's wait as "non-interactive". This
does not mean the task will be considered a CPU-hog - the wait will simply
not have an effect on the waiting task's priority - positive or negative
alike. Right now only pipe_wait() will make use of it, because it's a
common source of not-so-interactive waits (kernel compilation jobs, etc.).
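As a hedged sketch (not a quote of the actual fs/pipe.c diff; PIPE_WAIT() and
the surrounding wait loop are used purely for illustration), a blocking point
could tag its wait like this:

	/*
	 * Sketch only: the wait is marked non-interactive by OR-ing the new
	 * bit into the sleep state, so this sleep neither boosts nor hurts
	 * the task's interactivity bonus.
	 */
	DEFINE_WAIT(wait);

	prepare_to_wait(PIPE_WAIT(*inode), &wait,
			TASK_INTERRUPTIBLE | TASK_NONINTERACTIVE);
	schedule();
	finish_wait(PIPE_WAIT(*inode), &wait);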
[PATCH] sched: make idlest_group/cpu cpus_allowed-aware
Add relevant checks into find_idlest_group() and find_idlest_cpu() so that
they return only groups that contain allowed CPUs, and only allowed CPUs,
respectively.
Signed-off-by: M.Baris Demiray <baris@labristeknoloji.com> Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Con Kolivas [Sat, 10 Sep 2005 07:26:08 +0000 (00:26 -0700)]
[PATCH] sched: run SCHED_NORMAL tasks with real time tasks on SMT siblings
The hyperthread aware nice handling currently puts to sleep any non real
time task when a real time task is running on its sibling cpu. This can
lead to prolonged starvation by having the non real time task pegged to the
cpu with load balancing not pulling that task away.
Currently we force lower-priority hyperthread tasks to run for a percentage of
the time difference based on timeslice differences, which is meaningless when
comparing real time tasks to SCHED_NORMAL tasks. We can allow non real
time tasks to run with real time tasks on the sibling up to per_cpu_gain%
if we use jiffies as a counter.
Cleanups and micro-optimisations to the relevant code section should make
it more understandable as well.
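A hedged sketch of the jiffies-based test (illustrative only; the constant
DEF_TIMESLICE is assumed from the scheduler of that era and the exact form in
the patch may differ):

	/*
	 * Sketch: when the sibling runs a real time task, let a
	 * SCHED_NORMAL task run only for roughly per_cpu_gain% of each
	 * DEF_TIMESLICE window of jiffies, instead of comparing
	 * timeslices (which is meaningless against an RT task).
	 */
	if (rt_task(smt_curr)) {
		if ((jiffies % DEF_TIMESLICE) >
		    (sd->per_cpu_gain * DEF_TIMESLICE / 100))
			ret = 1;	/* delay the SCHED_NORMAL task */
	}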
Signed-off-by: Con Kolivas <kernel@kolivas.org> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Paul Jackson [Sat, 10 Sep 2005 07:26:06 +0000 (00:26 -0700)]
[PATCH] cpuset semaphore depth check deadlock fix
The cpusets-formalize-intermediate-gfp_kernel-containment patch
has a deadlock problem.
This patch was part of a set of four patches to make more
extensive use of the cpuset 'mem_exclusive' attribute to
manage kernel GFP_KERNEL memory allocations and to constrain
the out-of-memory (oom) killer.
A task that is changing cpusets in particular ways on a system
when it is very short of free memory could double trip over
the global cpuset_sem semaphore (get the lock and then deadlock
trying to get it again).
The second attempt to get cpuset_sem would be in the routine
cpuset_zone_allowed(). This was discovered by code inspection.
I cannot reproduce the problem except with an artificially
hacked kernel and a specialized stress test.
In real life you cannot hit this unless you are manipulating
cpusets, and are very unlikely to hit it unless you are rapidly
modifying cpusets on a memory tight system. Even then it would
be a rare occurrence.
If you did hit it, the task double tripping over cpuset_sem
would deadlock in the kernel, and any other task also trying
to manipulate cpusets would deadlock there too, on cpuset_sem.
Your batch manager would be wedged solid (if it was cpuset
savvy), but classic Unix shells and utilities would work well
enough to reboot the system.
The unusual condition that led to this bug is that unlike most
semaphores, cpuset_sem _can_ be acquired while in the page
allocation code, when __alloc_pages() calls cpuset_zone_allowed.
So it is easy to mistakenly perform the following sequence:
1) task makes system call to alter a cpuset
2) take cpuset_sem
3) try to allocate memory
4) memory allocator, via cpuset_zone_allowed, tries to take cpuset_sem
5) deadlock
The reason that this is not a serious bug for most users
is that almost all calls to allocate memory don't require
taking cpuset_sem. Only some code paths off the beaten
track require taking cpuset_sem -- which is good. Taking
a global semaphore on the main code path for allocating
memory would not scale well.
This patch fixes this deadlock by wrapping the up() and down()
calls on cpuset_sem in kernel/cpuset.c with code that tracks
the nesting depth of the current task on that semaphore, and
only does the real down() if the task doesn't hold the lock
already, and only does the real up() if the nesting depth
(number of unmatched downs) is exactly one.
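A minimal sketch of the wrapper idea (the per-task field name
cpuset_sem_nest_depth is an illustrative assumption, not necessarily what the
patch uses):

	/*
	 * Sketch of the nesting-depth wrappers around cpuset_sem.  Only the
	 * outermost down()/up() touch the real semaphore, so a task that
	 * already holds cpuset_sem can re-enter via cpuset_zone_allowed()
	 * without deadlocking.
	 */
	static inline void cpuset_down(struct semaphore *psem)
	{
		if (current->cpuset_sem_nest_depth == 0)
			down(psem);
		current->cpuset_sem_nest_depth++;
	}

	static inline void cpuset_up(struct semaphore *psem)
	{
		current->cpuset_sem_nest_depth--;
		if (current->cpuset_sem_nest_depth == 0)
			up(psem);
	}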
The previous required use of refresh_mems(), anytime that
the cpuset_sem semaphore was acquired and the code executed
while holding that semaphore might try to allocate memory, is
no longer required. Two refresh_mems() calls were removed
thanks to this. This is a good change, as failing to get
all the necessary refresh_mems() calls placed was a primary
source of bugs in this cpuset code. The only remaining call
to refresh_mems() is made while doing a memory allocation,
if certain task memory placement data needs to be updated
from its cpuset, due to the cpuset having been changed behind
the task's back.
Signed-off-by: Paul Jackson <pj@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch (written by me and also containing many suggestions of Arjan van
de Ven) does a major cleanup of the spinlock code. It does the following
things:
- consolidates and enhances the spinlock/rwlock debugging code
- simplifies the asm/spinlock.h files
- encapsulates the raw spinlock type and moves generic spinlock
features (such as ->break_lock) into the generic code.
- cleans up the spinlock code hierarchy to get rid of the spaghetti.
Most notably there's now only a single variant of the debugging code,
located in lib/spinlock_debug.c. (previously we had one SMP debugging
variant per architecture, plus a separate generic one for UP builds)
Also, I've enhanced the rwlock debugging facility; it will now track
write-owners. There is new spinlock-owner/CPU-tracking on SMP builds too.
All locks have lockup detection now, which will work for both soft and hard
spin/rwlock lockups.
The arch-level include files now only contain the minimally necessary
subset of the spinlock code - all the rest that can be generalized now
lives in the generic headers:
/*
 * here's the role of the various spinlock/rwlock related include files:
 *
 * on SMP builds:
 *
 *  asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
 *                        initializers
 *
 *  linux/spinlock_types.h:
 *                        defines the generic type and initializers
 *
 *  asm/spinlock.h:       contains the __raw_spin_*()/etc. lowlevel
 *                        implementations, mostly inline assembly code
 *
 *   (also included on UP-debug builds:)
 *
 *  linux/spinlock_api_smp.h:
 *                        contains the prototypes for the _spin_*() APIs.
 *
 *  linux/spinlock.h:     builds the final spin_*() APIs.
 *
 * on UP builds:
 *
 *  linux/spinlock_type_up.h:
 *                        contains the generic, simplified UP spinlock type.
 *                        (which is an empty structure on non-debug builds)
 *
 *  linux/spinlock_types.h:
 *                        defines the generic type and initializers
 *
 *  linux/spinlock_up.h:
 *                        contains the __raw_spin_*()/etc. version of UP
 *                        builds. (which are NOPs on non-debug, non-preempt
 *                        builds)
 *
 *   (included on UP-non-debug builds:)
 *
 *  linux/spinlock_api_up.h:
 *                        builds the _spin_*() APIs.
 *
 *  linux/spinlock.h:     builds the final spin_*() APIs.
 */
All SMP and UP architectures are converted by this patch.
arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
cross-compilers. m32r, mips, sh and sparc have not been tested yet, but should
be mostly fine.
From: Grant Grundler <grundler@parisc-linux.org>
Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
Builds 32-bit SMP kernel (not booted or tested). I did not try to build
non-SMP kernels. That should be trivial to fix up later if necessary.
I converted bit ops atomic_hash lock to raw_spinlock_t. Doing so avoids
some ugly nesting of linux/*.h and asm/*.h files. Those particular locks
are well tested and contained entirely inside arch specific code. I do NOT
expect any new issues to arise with them.
If someone does ever need to use debug/metrics with them, then they will
need to unravel this hairball between spinlocks, atomic ops, and bit ops
that exist only because parisc has exactly one atomic instruction: LDCW
(load and clear word).
From: "Luck, Tony" <tony.luck@intel.com>
ia64 fix
Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Arjan van de Ven <arjanv@infradead.org> Signed-off-by: Grant Grundler <grundler@parisc-linux.org> Cc: Matthew Wilcox <willy@debian.org> Signed-off-by: Hirokazu Takata <takata@linux-m32r.org> Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se> Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] md: really get sb_size setting right in all cases
There was another case where sb_size wasn't being set, so instead do the
sensible thing and set it when filling in the content of a superblock. That
ensures that whenever we write a superblock, the sb_size MUST be set.
Signed-off-by: Neil Brown <neilb@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] md: make sure the new 'sb_size' is set properly when a device is added without a pre-existing superblock.
There are two ways to add devices to an md/raid array.
It can have a superblock written to it, and then be given to the md driver,
which will read the superblock (the new way)
or
md can be told (through SET_ARRAY_INFO) the shape of the array, and
then told about individual drives, and md will create the required
superblock (the old way).
The newly introduced sb_size was only set for drives being added the
new way, not the old way. Oops :-(
Signed-off-by: Neil Brown <neilb@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] md: choose better default offset for bitmap.
On reflection, a better default location for hot-adding bitmaps with version-1
superblocks is immediately after the superblock. There might not be much room
there, but there is usually at least 3k, and that is a good start.
Signed-off-by: Neil Brown <neilb@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] md: tidy up daemon stop/start code in md/bitmap.c
The bitmap code used to have two daemons, so there is some 'common' start/stop
code. But now there is only one, so the common code is just noise.
This patch tidies this up somewhat.
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
mddev->bitmap gets cleared before the writeback daemon is stopped. So the
write_back daemon needs to be careful not to dereference the 'bitmap' if it is
NULL.
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Switch MD to use the kthread infrastructure, to simplify the code and get rid
of tasklist_lock abuse in md_unregister_thread.
Also don't flush signals in md_thread, as the called thread will always do
that.
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] md: add write-intent-bitmap support to raid5
The most awkward part of this is delaying write requests until bitmap updates have
been flushed.
To achieve this, we have a sequence number (seq_flush) which is incremented
each time the raid5 is unplugged.
If the raid thread notices that this has changed, it flushes bitmap changes,
and assigns the value of seq_flush to seq_write.
When a write request arrives, it is given the number from seq_write, and that
write request may not complete until seq_flush is larger than the saved seq
number.
We have a new queue for storing stripes which are waiting for a bitmap flush
and an extra flag for stripes to record if the write was 'degraded' and so
should not clear the bit in the bitmap.
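In rough pseudocode (an illustrative sketch only; field names such as
seq_flush, seq_write and bm_seq are assumptions, not a quote of
drivers/md/raid5.c):

	/* On unplug: bitmap updates may now need flushing. */
	conf->seq_flush++;

	/* In the raid5 daemon: flush bitmap changes, then publish the
	 * sequence number that is now safely on disk. */
	if (conf->seq_write != conf->seq_flush) {
		int seq = conf->seq_flush;
		bitmap_unplug(mddev->bitmap);
		conf->seq_write = seq;
	}

	/* On an incoming write: tag the stripe with the current write
	 * sequence.  The write may complete only once seq_flush has
	 * advanced past bm_seq, i.e. once the covering bitmap bits have
	 * been flushed. */
	sh->bm_seq = conf->seq_write;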
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] md: limit size of sb read/written to appropriate amount
version-1 superblocks are not (normally) 4K long, and can be of variable size.
Writing the full 4K can cause corruption (but only in non-default
configurations).
With this patch the super-block-flavour can choose a size to read, and set a
size to write based on what it finds.
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] md: fix bitmap/read_sb_page so that it handles errors properly.
read_sb_page() assumed that if sync_page_io fails, the device would be marked
faulty. However it isn't. So in the face of an error, read_sb_page would loop
forever.
Redo the logic so that this cannot happen.
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] md: allow hot-adding devices to arrays with non-persistent superblocks.
It is possible (and occasionally useful) to have a raid1 without persistent
superblocks. The code in add_new_disk for adding a device to such an array
always tries to read a superblock.
This will obviously fail.
So do the appropriate test and call md_import_device with
appropriate args.
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] md: support md/linear array with components greater than 2 terabytes.
linear currently uses division by the size of the smallest component device
to find which device a request goes to. If that smallest device is larger
than 2 terabytes, then the division will not work on some systems.
So we introduce a pre-shift, and take care not to make the hash table too
large, much like the code in raid0.
Also get rid of conf->nr_zones, which is not needed.
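A hedged sketch of the pre-shift idea (illustrative only; field names like
hash_spacing, preshift and hash_table are assumptions and the real
drivers/md/linear.c code differs in detail):

	/*
	 * Sketch: keep the divisor within 32 bits so the sector-to-device
	 * hash still works when the smallest component exceeds 2 terabytes.
	 */
	sector_t spacing = smallest_component_sectors;
	int preshift = 0;

	while (spacing > (sector_t)0xffffffffUL) {
		spacing >>= 1;		/* shrink until it fits in 32 bits */
		preshift++;
	}
	conf->hash_spacing = spacing;
	conf->preshift = preshift;

	/* Mapping a request: shift the target sector by the same amount
	 * before the (now 32-bit) division. */
	sector_t block = bio_sector >> conf->preshift;
	(void)sector_div(block, (u32)conf->hash_spacing);
	dev = conf->hash_table[block];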
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
If a device is flagged 'WriteMostly' and the array has a bitmap, and the
bitmap superblock indicates that write_behind is allowed, then write_behind is
enabled for WriteMostly devices.
Write requests will be acknowledged as complete to the caller (via b_end_io)
when all non-WriteMostly devices have completed the write, but will not be
cleared from the bitmap until all devices complete.
This requires memory allocation to make a local copy of the data being
written. If there is insufficient memory, then we fall back on normal write
semantics.
Signed-Off-By: Paul Clements <paul.clements@steeleye.com> Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] md: allow hot-add and hot-remove of md intent logging bitmaps
Both file-bitmaps and superblock bitmaps are supported.
If you add a bitmap file on the array device, you lose.
This introduces a 'default_bitmap_offset' field in mddev, as the ioctl used
for adding a superblock bitmap doesn't have room for giving an offset. Later,
this value will be settable via sysfs.
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] md: improve handling of bitmap initialisation.
When we find a 'stale' bitmap, possibly because it is new, we should not just
assume every bit needs to be set, but rather base the setting of bits on the
current state of the array (degraded and recovery_cp).
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] dm: fix rh_dec()/rh_inc() race in dm-raid1.c
Fix another bug in dm-raid1.c where a dirty region may stay on, or be moved
to, the clean list and be freed while still in use.
It happens as follows:
CPU0                                    CPU1
------------------------------------------------------------------------------
rh_dec()
  if (atomic_dec_and_test(pending))
     <the region is still marked dirty>
                                        rh_inc()
                                          if the region is clean
                                             mark the region dirty
                                             and remove from clean list
     mark the region clean
     and move to clean list
                                          atomic_inc(pending)
At this stage, the region is in clean list and will be mistakenly reclaimed
by rh_update_states() later.
[PATCH] md: fix minor error in raid10 read-balancing calculation.
'this_sector' is a virtual (array) address while 'head_position' is a physical
(device) address, so subtraction doesn't make any sense. devs[slot].addr
should be used instead of this_sector.
However, this patch doesn't make much practical difference to the read
balancing due to the effects of later code.
Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Don't just irritate all other kernel developers. Fix the users first,
then you can re-introduce the must-check infrastructure to avoid new
cases creeping in.
Daniel Ritz [Thu, 8 Sep 2005 22:57:14 +0000 (00:57 +0200)]
[PATCH] Update PCI IOMEM allocation start
This fixes the problem with "Averatec 6240 pcmcia_socket0: unable to
apply power", which was due to the CardBus IOMEM register region being
allocated at an address that was actually inside the RAM window that had
been reserved for video frame-buffers in a UMA setup.
The BIOS _should_ have marked that region reserved in the e820 memory
descriptor tables, but did not.
It is fixed by rounding up the default starting address of PCI memory
allocations, so that we leave a bigger gap after the final known memory
location. The amount of rounding depends on how big the unused memory
gap is that we can allocate IOMEM from.
Clean up timer initialization by introducing DEFINE_TIMER, à la
DEFINE_SPINLOCK. Build and boot-tested on x86. A similar patch has
been in the -RT tree for some time.
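A small usage sketch, assuming the 2.6-era four-argument form
(name, function, expires, data):

	#include <linux/timer.h>

	static void my_timeout(unsigned long data)
	{
		printk(KERN_DEBUG "timer fired, data=%lu\n", data);
	}

	/*
	 * Before: a static timer needed TIMER_INITIALIZER() or a runtime
	 * init_timer() call.  With the new macro it is declared and
	 * initialised in one line, just like DEFINE_SPINLOCK().
	 */
	static DEFINE_TIMER(my_timer, my_timeout, 0, 0);

	/* later, e.g. in module init: mod_timer(&my_timer, jiffies + HZ); */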
[PATCH] FUSE: don't allow restarting of system calls
This patch removes the ability to interrupt and restart operations while there
hasn't been any side-effect.
The reason: applications. It seems there are some apps that generate
signals at a fast rate. This means that if the operation cannot make
enough progress between two signals, it will be restarted for ever. This
bug actually manifested itself with 'krusader' trying to open a file for
writing under sshfs. Thanks to Eduard Czimbalmos for the report.
The problem can be solved just by making open() uninterruptible, because in
this case it was the truncate operation that slowed down the progress. But
it's better to solve this by simply not allowing interrupts at all (except
SIGKILL), because applications don't expect file operations to be
interruptible anyway. As an added bonus the code is simplified somewhat.
Don't change mtime/ctime/atime to local time on read/write. Instead, invalidate
file attributes, so the next stat() will force a GETATTR call. Bug reported by
Ben Grimm.
Make data caching behavior selectable on a per-open basis instead of
per-mount. Compatibility for the old mount options 'kernel_cache' and
'direct_io' is retained in the userspace library (version 2.4.0-pre1 or
later).
[PATCH] fuse: transfer readdir data through device
This patch removes a long lasting "hack" in FUSE, which used a separate
channel (a file descriptor referring to a disk file) to transfer directory
contents from userspace to the kernel.
The patch adds three new operations (OPENDIR, READDIR, RELEASEDIR), which
have semantics and implementation exactly matching the respective file
operations (OPEN, READ, RELEASE).
This simplifies the directory reading code. Also disk space is not
necessary, which can be important in embedded systems.
This patch adds support for the "direct_io" mount option of FUSE.
When this mount option is specified, the page cache is bypassed for
read and write operations. This is useful for example, if the
filesystem doesn't know the size of files before reading them, or when
any kind of caching is harmful.
[PATCH] FUSE: tighten check for processes allowed access
This patch tightens the check for allowing processes to access non-privileged
mounts. The rationale is that the filesystem implementation can control the
behavior or get otherwise unavailable information of the filesystem user. If
the filesystem user process has the same uid and gid and is not a suid or sgid
application, then access is safe. Otherwise access is not allowed unless the
"allow_other" mount option is given (for which policy is controlled by the
userspace mount utility).
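A hedged sketch of what such a policy check could look like (names like
fuse_conn, fc->user_id and FUSE_ALLOW_OTHER are illustrative assumptions, not
quotes of the fs/fuse sources):

	/*
	 * Sketch only: access is granted when "allow_other" was given, or
	 * when the calling task runs with exactly the mount owner's
	 * credentials and is not a setuid/setgid program (so no borrowed
	 * privileges can leak to the filesystem implementation).
	 */
	static int fuse_allow_task(struct fuse_conn *fc, struct task_struct *task)
	{
		if (fc->flags & FUSE_ALLOW_OTHER)
			return 1;

		return task->euid == fc->user_id &&
		       task->suid == fc->user_id &&
		       task->uid  == fc->user_id &&
		       task->egid == fc->group_id &&
		       task->sgid == fc->group_id &&
		       task->gid  == fc->group_id;
	}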
Thanks to everyone on linux-fsdevel, especially Martin Mares, who helped uncover
problems with the previous approach.
With the help of the readpages() operation multiple reads are bundled
together and sent as a single request to userspace. This can improve
reading performance.
This patch adds miscellaneous mount options to the FUSE filesystem.
The following mount options are added:
o default_permissions: check permissions with generic_permission()
o allow_other: allow other users to access files
o allow_root: allow root to access files
o kernel_cache: don't invalidate page cache on open
The most important difference between ordinary filesystems and FUSE is
the fact that the filesystem data/metadata is provided by a userspace
process run with the privileges of the mount "owner" instead of the
kernel, or some remote entity usually running with elevated
privileges.
The security implication of this is that a non-privileged user must
not be able to use this capability to compromise the system. Obvious
requirements arising from this are:
- mount owner should not be able to get elevated privileges with the
help of the mounted filesystem
- mount owner should not be able to induce undesired behavior in
other users' or the super user's processes
- mount owner should not get illegitimate access to information from
other users' and the super user's processes
These are currently ensured with the following constraints:
1) mount is only allowed to directory or file which the mount owner
can modify without limitation (write access + no sticky bit for
directories)
2) nosuid,nodev mount options are forced
3) any process running with fsuid different from the owner is denied
all access to the filesystem
1) and 2) are ensured by the "fusermount" mount utility which is a
setuid root application doing the actual mount operation.
3) is ensured by a check in the permission() method in the kernel
I started thinking about doing 3) in a different way because Christoph
H. made a big deal out of it, saying that FUSE is unacceptable into
mainline in this form.
The suggested use of private namespaces would be OK, but in their
current form have many limitations that make their use impractical (as
discussed in this thread).
Suggested improvements that would address these limitations:
- implement shared subtrees
- allow a process to join an existing namespace (make namespaces
first-class objects)
- implement the namespace creation/joining in a PAM module
With all that in place the check of owner against current->fsuid may
be removed from the FUSE kernel module, without compromising the
security requirements.
Suid programs still pose interesting questions, since they get access even
to the private namespace causing some information leak (exact
order/timing of filesystem operations performed), giving some
ptrace-like capabilities to unprivileged users. BTW this problem is
not strictly limited to the namespace approach, since suid programs
setting fsuid and accessing users' files will succeed with the current
approach too.
This patch brings the now out-of-date Documentation/filesystems/vfs.txt
back to life. Thanks to Carsten Otte, Trond Myklebust, and Anton
Altaparmakov for their help on updating this documentation.
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Pekka Enberg [Fri, 9 Sep 2005 20:10:16 +0000 (13:10 -0700)]
[PATCH] update kfree, vfree, and vunmap kerneldoc
This patch clarifies NULL handling of kfree() and vfree(). In addition,
the wording of the calling-context restriction for vfree() and vunmap() is changed
from "may not" to "must not."
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi> Acked-by: Manfred Spraul <manfred@colorfullife.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Update the hacking guide, before CONFIG_PREEMPT_RT goes in and it needs
rewriting again.
Changes include modernization of quotes, removal of most references to
bottom halves (some mention required because we still use bh in places to
mean softirq).
It would be nice to have a discussion of sparse and various annotations.
Please send patches straight to akpm.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (authored) Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This set of two patches add support for the framebuffer of the Samsung S3C2410
ARM SoC. This driver was started about one year ago and is now used on iPAQ
h1930/h1940, Acer n30 and probably other s3c2410-based machines I'm not aware
of. I also heard yesterday that it works on the iPAQ rx3715/rx3115
(s3c2440-based machines).
Signed-Off-By: Arnaud Patard <arnaud.patard@rtp-net.org> Signed-off-by: Antonino Daplas <adaplas@pol.net> Signed-off-by: Ben Dooks <ben@trinity.fluff.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Add ddc/i2c support for i810fb. This will allow the driver to get display
information, especially for monitors with fickle timings. The i2c support
depends on CONFIG_FB_I810_GTF.
Changed __init* to __devinit*
Signed-off-by: Antonino Daplas <adaplas@pol.net> Signed-off-by: Alexander Nyberg <alexn@telia.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] fbcon: Break up bit_putcs into its component functions
The function bit_putcs() in drivers/video/console/bitblit.c is becoming large.
Break it up into its component functions (bit_putcs_unaligned and
bit_putcs_aligned).
Incorporated fb_pad_aligned_buffer() optimization by Roman Zippel.
Richard Purdie [Fri, 9 Sep 2005 20:10:03 +0000 (13:10 -0700)]
[PATCH] pxafb: Add hsync time reporting hook
To solve touchscreen interference problems devices like the Sharp Zaurus
SL-C3000 need to know the length of the horizontal sync pulses. This patch
adds a hook to pxafb so the touchscreen driver can function correctly.
Signed-Off-By: Richard Purdie <rpurdie@rpsys.net> Signed-off-by: Antonino Daplas <adaplas@pol.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] nvidiafb: Fixed mirrored characters in big endian machines
nvidiafb_imageblit converts the bit-data stream from big endian to little
endian. This produces mirrored characters when the machine is big endian. Do
not endian-convert on big-endian machines.