There seem to be no more users of, or interest in, the NEC Osprey evaluation system for the NEC VR4181 SoC, which is an old part anyway, so remove the
code. More information on the Osprey can be found at
http://www.linux-mips.org/wiki/Osprey.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] ppc64: replace schedule_timeout() with msleep_interruptible()
Use msleep_interruptible() instead of schedule_timeout() in ppc64-specific
code to cleanup/simplify the sleeping logic. Change the units of the
parameter of do_event_scan_all_cpus() to milliseconds from jiffies. The
return value of rtas_extended_busy_delay_time() was incorrectly being used
as a jiffies value (it is actually milliseconds), which is fixed by using
the value as a parameter to msleep_interruptible(). Also, use
rtas_extended_busy_delay_time() in another case where similar logic is
duplicated.
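A hedged sketch of the before/after pattern (rtas_extended_busy_delay_time() is the real function; the surrounding helper is hypothetical):

	static void rtas_busy_sleep(int rc)	/* hypothetical helper */
	{
		unsigned int ms = rtas_extended_busy_delay_time(rc);

		/* before: ms was fed to schedule_timeout(), which takes
		 * jiffies, so the delay was wrong whenever HZ != 1000 */
		/* after: msleep_interruptible() takes milliseconds */
		msleep_interruptible(ms);
	}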
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Olof Johansson [Sat, 3 Sep 2005 22:55:59 +0000 (15:55 -0700)]
[PATCH] ppc64: Add VMX save flag to VPA
We need to indicate to the hypervisor that it needs to save our VMX
registers when switching partitions on a shared-processor system, just as
it needs to for FP and PMC registers.
This could be made to happen on-demand when VMX is used, but we don't do that for FP or PMC right now either, so let's not overcomplicate things.
Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: <engebret@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Mark A. Greer [Sat, 3 Sep 2005 22:55:57 +0000 (15:55 -0700)]
[PATCH] ppc32: katana updates
Update the katana platform support code:
- if booted as zImage, pass mem size in via bi_req from bootwrapper
- if booted as uImage, get mem size from bd_info passed in from u-boot
- add support for 82544 present on katana 752i's
- set cacheline size on pci devices
- some minor fixups
Signed-off-by: Mark A. Greer <mgreer@mvista.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Mark A. Greer [Sat, 3 Sep 2005 22:55:56 +0000 (15:55 -0700)]
[PATCH] ppc32: mv64x60 updates & enhancements
Updates and enhancement to the ppc32 mv64x60 code:
- move code to get mem size from mem ctlr to bootwrapper
- address some errata in the mv64360 pic code
- some minor cleanups
- export one of the bridge's registers via sysfs so a user daemon can watch for extraction events
Signed-off-by: Mark A. Greer <mgreer@mvista.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Add a declaration for cacheable_memcpy(). I'll be needing this function in the new 4xx EMAC driver I'm going to submit to netdev soon.
IMHO, the better place for the declaration would be asm-powerpc/string.h; unfortunately, ppc64 doesn't have this function, so asm-ppc/system.h is the next best place.
Signed-off-by: Eugene Surovegin <ebs@ebshome.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Arthur Othieno [Sat, 3 Sep 2005 22:55:51 +0000 (15:55 -0700)]
[PATCH] ppc32: Re-order cputable for 750CXe DD2.4 entry
"745/755" (pvr_value:0x00083000) is a catch-all entry.
Since arch/ppc/kernel/misc.S:identify_cpu() returns on first match,
move this lower in the table so 750CXe DD2.4 (pvr_value:0x00083214)
may be correctly enumerated.
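A hedged sketch of the resulting ordering (struct cpu_spec fields as in arch/ppc/kernel/cputable.c of the time; other fields omitted):

	{	/* 750CXe DD2.4: exact PVR, so it must sit above the catch-all */
		.pvr_mask	= 0xffffffff,
		.pvr_value	= 0x00083214,
		.cpu_name	= "750CXe",
		/* other fields omitted */
	},
	{	/* 745/755 catch-all: identify_cpu() stops at the first
		 * (pvr & pvr_mask) == pvr_value match, so keep this lower */
		.pvr_mask	= 0xfffff000,
		.pvr_value	= 0x00083000,
		.cpu_name	= "745/755",
		/* other fields omitted */
	},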
Signed-off-by: Arthur Othieno <a.othieno@bluewin.ch>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Kumar Gala [Sat, 3 Sep 2005 22:55:50 +0000 (15:55 -0700)]
[PATCH] ppc32: Added PCI support for MPC83xx
Adds support for the two PCI buses on MPC83xx and the MPC834x SYS/PIBS reference board.
The code initializes PCI inbound/outbound windows, allocates and registers
PCI memory/io space. Be aware that setup of the PCI buses on the PIBs
board is expected to be done by the firmware.
Signed-off-by: Tony Li <tony.li@freescale.com>
Signed-off-by: Kumar Gala <kumar.gala@freescale.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Lee Nicks [Sat, 3 Sep 2005 22:55:48 +0000 (15:55 -0700)]
[PATCH] ppc32: add support for Marvell EV64360BP board
This patch adds support for the Marvell EV64360BP board. So far it supports the mpsc serial console, gigabit ethernet, a jffs2 root filesystem, etc. Support for other devices, like the watchdog and the RTC, will be added later.
Signed-off-by: Lee Nicks <allinux@gmail.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Kumar Gala [Sat, 3 Sep 2005 22:55:47 +0000 (15:55 -0700)]
[PATCH] ppc32: add CONFIG_HZ
While ppc32 has the CONFIG_HZ Kconfig option, it wasn't actually being used. Connect it up and set all platforms to 250 Hz. This pretty much mimics the ppc64 patch from Anton Blanchard.
Signed-off-by: Kumar Gala <kumar.gala@freescale.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Kumar Gala [Sat, 3 Sep 2005 22:55:46 +0000 (15:55 -0700)]
[PATCH] ppc32: ppc_sys system on chip identification additions
Add the ability to identify an SOC by a name and id. There are cases in
which the integer identifier is not sufficient to specify a specific SOC.
In these cases we can use a string to further qualify the match.
Signed-off-by: Vitaly Bordug <vbordug@ru.mvista.com>
Signed-off-by: Kumar Gala <kumar.gala@freescale.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Roland Dreier [Sat, 3 Sep 2005 22:55:43 +0000 (15:55 -0700)]
[PATCH] ppc32: Don't sleep in flush_dcache_icache_page()
flush_dcache_icache_page() will be called on an instruction page fault. We
can't sleep in the fault handler, so use kmap_atomic() instead of just
kmap() for the Book-E case.
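Roughly (a sketch assuming the dedicated atomic kmap type KM_PPC_SYNC_ICACHE, not a quote of the exact hunk):

	/* kmap() can sleep waiting for a free highmem slot, which is
	 * illegal in the fault path; kmap_atomic() never sleeps */
	void *start = kmap_atomic(page, KM_PPC_SYNC_ICACHE);
	__flush_dcache_icache(start);
	kunmap_atomic(start, KM_PPC_SYNC_ICACHE);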
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Acked-by: Matt Porter <mporter@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Kumar Gala [Sat, 3 Sep 2005 22:55:39 +0000 (15:55 -0700)]
[PATCH] ppc32: Cleaned up global namespace of Book-E watchdog variables
Renamed the global variables that convey whether the watchdog is enabled and the periodicity of the timer, and moved the declarations for these variables into a header.
Signed-off-by: Matt McClintock <msm@freescale.com>
Signed-off-by: Kumar Gala <kumar.gala@freescale.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Kumar Gala [Sat, 3 Sep 2005 22:55:36 +0000 (15:55 -0700)]
[PATCH] cpm_uart: Fix 2nd serial port on MPC8560 ADS
The 2nd serial port on the MPC8560 ADS was not being configured correctly
and thus could not be used as a console. Updated the defconfig for the
board to configure the proper SCC channel for the 2nd serial port.
Signed-off-by: Roy Zang <tie-fei.zang@freescale.com>
Signed-off-by: Kumar Gala <kumar.gala@freescale.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Matt Porter [Sat, 3 Sep 2005 22:55:35 +0000 (15:55 -0700)]
[PATCH] ppc32: add phy excluded features to ocp_func_emac_data
This patch adds a field to struct ocp_func_emac_data that allows
platform-specific unsupported PHY features to be passed in to the ibm_emac
ethernet driver.
This patch also adds some logic for the Bamboo eval board to populate this
field based on the dip switches on the board. This is a workaround for the
improperly biased RJ-45 sockets on the Rev. 0 Bamboo.
Signed-off-by: Wade Farnsworth <wfarnsworth@mvista.com>
Signed-off-by: Matt Porter <mporter@kernel.crashing.org>
Cc: Jeff Garzik <jgarzik@pobox.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Kumar Gala [Sat, 3 Sep 2005 22:55:34 +0000 (15:55 -0700)]
[PATCH] ppc32: Add ppc_sys descriptions for PowerQUICC II devices
Added ppc_sys device and system definitions for PowerQUICC II devices. This will allow drivers for PQ2 to be proper platform device drivers, which can be shared with PQ3 processors that have the same peripherals.
Signed-off-by: Matt McClintock <msm@freescale.com>
Signed-off-by: Kumar Gala <kumar.gala@freescale.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Kumar Gala [Sat, 3 Sep 2005 22:55:33 +0000 (15:55 -0700)]
[PATCH] ppc32: Added support for the Book-E style Watchdog Timer
PowerPC 40x and Book-E processors support a watchdog timer at the processor
core level. The timer has implementation-dependent timeout frequencies that can be configured by software.
On the first watchdog timeout we get a critical exception. It is left to board-specific code to determine what should happen at this point. If nothing is done and another timeout period expires, the processor may attempt to reset the machine.
Stephen Smalley [Sat, 3 Sep 2005 22:55:18 +0000 (15:55 -0700)]
[PATCH] Generic VFS fallback for security xattrs
This patch modifies the VFS setxattr, getxattr, and listxattr code to fall
back to the security module for security xattrs if the filesystem does not
support xattrs natively. This allows security modules to export the incore
inode security label information to userspace even if the filesystem does
not provide xattr storage, and eliminates the need to individually patch
various pseudo filesystem types to provide such access. The patch removes
the existing xattr code from devpts and tmpfs as it is then no longer
needed.
The patch restructures the code flow slightly to reduce duplication between
the normal path and the fallback path, but this should only have one
user-visible side effect - a program may get -EACCES rather than
-EOPNOTSUPP if policy denied access but the filesystem didn't support the
operation anyway. Note that the post_setxattr hook call is not needed in
the fallback case, as the inode_setsecurity hook call handles the incore
inode security state update directly. In contrast, we do call fsnotify in
both cases.
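The shape of the fallback, sketched with simplified names (not the verbatim fs/xattr.c code):

	error = -EOPNOTSUPP;
	if (inode->i_op && inode->i_op->getxattr) {
		/* normal path: the filesystem stores xattrs natively */
		error = inode->i_op->getxattr(dentry, name, value, size);
	} else if (!strncmp(name, XATTR_SECURITY_PREFIX,
			    sizeof XATTR_SECURITY_PREFIX - 1)) {
		/* fallback: serve security.* from the incore inode
		 * security label via the security module */
		const char *suffix = name + sizeof XATTR_SECURITY_PREFIX - 1;
		error = security_inode_getsecurity(inode, suffix, value, size);
	}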
Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
Acked-by: James Morris <jmorris@namei.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Stephen Smalley [Sat, 3 Sep 2005 22:55:16 +0000 (15:55 -0700)]
[PATCH] selinux: Reduce memory use by avtab
This patch improves memory use by SELinux by both reducing the avtab node
size and reducing the number of avtab nodes. The memory savings are
substantial; for example, James Morris reported large reductions on a 64-bit system after boot, for both the targeted and strict policies.
The improvement in memory use comes at a cost in the speed of security
server computations of access vectors, but these computations are only
required on AVC cache misses, and performance measurements by James Morris
using a number of benchmarks have shown that the change does not cause any
significant degradation.
Note that a rebuilt policy via an updated policy toolchain
(libsepol/checkpolicy) is required in order to gain the full benefits of
this patch, although some memory savings benefits are immediately applied
even to older policies (in particular, the reduction in avtab node size).
Sources for the updated toolchain are presently available from the
sourceforge CVS tree (http://sourceforge.net/cvs/?group_id=21266), and
tarballs are available from http://www.flux.utah.edu/~sds.
Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Martin Hicks [Sat, 3 Sep 2005 22:55:11 +0000 (15:55 -0700)]
[PATCH] VM: add page_state info to per-node meminfo
Add page_state info to the per-node meminfo file in sysfs. This is mostly
just for informational purposes.
The lack of this information was brought up recently during a discussion
regarding pagecache clearing, and I put this patch together to test out one
of the suggestions.
It seems like interesting info to have, so I'm submitting the patch.
Signed-off-by: Martin Hicks <mort@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Mauricio Lin [Sat, 3 Sep 2005 22:55:10 +0000 (15:55 -0700)]
[PATCH] add /proc/pid/smaps
Add a "smaps" entry to /proc/pid: show howmuch memory is resident in each
mapping.
People who want to do memory consumption analysis can use it, mainly when they need to figure out which libraries can be reduced for embedded systems. The new fields give the physical size of shared and private mappings, each split into clean and dirty pages.
(Includes a cleanup from "Richard Purdie" <rpurdie@rpsys.net>)
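For example, an entry in smaps looks something like this (field names as added by the patch; the numbers are illustrative):

	08048000-080bc000 r-xp 00000000 03:02 13130      /bin/bash
	Size:               464 kB
	Rss:                424 kB
	Shared_Clean:       424 kB
	Shared_Dirty:         0 kB
	Private_Clean:        0 kB
	Private_Dirty:        0 kB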
From: Torsten Foertsch <torsten.foertsch@gmx.net>
show_smap first calls show_map and then prints its additional information to the seq_file. show_map checks whether everything it has to print fits into the buffer and, if so, marks the current vma as written. While that is correct for show_map, it is not for show_smap: here the vma should be marked as written only after the additional information is also written.
The attached patch cures the problem. It moves the functionality of the
show_map function to a new function show_map_internal that is called with an
additional struct mem_size_stats* argument. Then show_map calls
show_map_internal with NULL as struct mem_size_stats* whereas show_smap calls
it with a real pointer. Now the final
if (m->count < m->size) /* vma is copied successfully */
m->version = (vma != get_gate_vma(task))? vma->vm_start: 0;
is done only if the whole entry fits into the buffer.
Proposed by and based on a patch from Eric Dumazet <dada1@cosmosbay.com>:
This patch removes an unnecessary critical section in the ksize() function, as cli/sti are rather expensive on modern CPUs.
It additionally adds a docbook entry for ksize() and further simplifies the
code.
Eric Dumazet [Sat, 3 Sep 2005 22:55:06 +0000 (15:55 -0700)]
[PATCH] mm/slab.c: prefetchw the start of new allocated objects
Most objects returned by __cache_alloc() will be written to by the caller (though not all callers write the whole object; some write just the beginning). prefetchw() tells a modern CPU to anticipate the coming writes, i.e. to start some memory transactions in advance.
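A hedged sketch of the idea (the allocator fast path is elided behind a hypothetical helper; only the hint itself is the point):

	void *objp = __do_cache_alloc(cachep, flags);	/* hypothetical */

	if (likely(objp))
		prefetchw(objp);	/* warm the first cache line for
					 * writing; a no-op on CPUs without
					 * a write-prefetch instruction */
	return objp;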
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] x86_64: avoid some atomic operations during address space destruction
Any architecture that has hardware updated A/D bits that require
synchronization against other processors during PTE operations can benefit
from doing non-atomic PTE updates during address space destruction.
Originally done on i386, now ported to x86_64.
Doing a read/write pair instead of an xchg() operation saves the implicit lock, which turns out to be a big win on 32-bit (especially with PAE).
Add a new accessor for PTEs, which passes the full hint from the mmu_gather
struct; this allows architectures with hardware pagetables to optimize away
atomic PTE operations when destroying an address space. Removing the
locked operation should allow better pipelining of memory access in this
loop. I measured an average savings of 30-35 cycles per zap_pte_range on
the first 500 destructions on Pentium-M, but I believe the optimization
would win more on older processors which still assert the bus lock on xchg
for an exclusive cacheline.
Update: I made some new measurements, and this saves exactly 26 cycles over
ptep_get_and_clear on Pentium M. On P4, with a PAE kernel, this saves 180
cycles per ptep_get_and_clear, for a whopping 92160 cycles savings for a
full address space destruction.
pte_clear_full is not yet used, but is provided for future optimizations (in particular, when running inside of a hypervisor that queues page table updates, the full hint allows us to avoid queueing unnecessary page table updates for an address space in the process of being destroyed).
This is not a huge win, but it does help a bit, and sets the stage for
further hypervisor optimization of the mm layer on all architectures.
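A sketch of the accessor's shape (close to, but not claimed to be, the exact fallback):

	static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
			unsigned long addr, pte_t *ptep, int full)
	{
		pte_t pte;

		if (full) {
			/* the mm is being torn down: nothing else can
			 * touch these ptes, so a plain read + clear
			 * avoids the locked xchg */
			pte = *ptep;
			pte_clear(mm, addr, ptep);
		} else {
			pte = ptep_get_and_clear(mm, addr, ptep);
		}
		return pte;
	}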
Signed-off-by: Zachary Amsden <zach@vmware.com>
Cc: Christoph Lameter <christoph@lameter.com>
Cc: <linux-mm@kvack.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] remove hugetlb_clean_stale_pgtable() and fix huge_pte_alloc()
I don't think we need to call hugetlb_clean_stale_pgtable() anymore in 2.6.13 because of the rework with free_pgtables(). It now collects all the pte pages at the time of munmap. It used to collect page table pages only when an entire pgd could be freed, leaving stale pte pages behind. Not anymore with 2.6.13. This function will never be called, and we should turn it into a BUG_ON.
I also spotted two problems here, not Adam's fault :-)
(1) in huge_pte_alloc(), it looks like a bug to me that pud is not
checked before calling pmd_alloc()
(2) hugetlb_clean_stale_pgtable() also missed a call to pmd_free_tlb. I think a tlb flush is required to flush the mapping for the page table itself when we clear out the pmd pointing to a pte page. However, since hugetlb_clean_stale_pgtable() is never called, it won't trigger the bug.
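Problem (1) and its fix, sketched against the 2.6.13 four-level walk (simplified):

	pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr)
	{
		pgd_t *pgd = pgd_offset(mm, addr);
		pud_t *pud = pud_alloc(mm, pgd, addr);

		if (!pud)	/* the missing check: pud_alloc() can fail */
			return NULL;
		return (pte_t *)pmd_alloc(mm, pud, addr);
	}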
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Cc: Adam Litke <agl@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Adam Litke [Sat, 3 Sep 2005 22:55:01 +0000 (15:55 -0700)]
[PATCH] hugetlb: check p?d_present in huge_pte_offset()
For demand faulting, we cannot assume that the page tables will be
populated. Do what the rest of the architectures do and test p?d_present()
while walking down the page table.
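Sketched in the style of the i386/x86_64 walk of the time (not the exact hunk):

	pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
	{
		pgd_t *pgd = pgd_offset(mm, addr);
		pud_t *pud;

		if (!pgd_present(*pgd))	/* may be empty with demand faulting */
			return NULL;
		pud = pud_offset(pgd, addr);
		if (!pud_present(*pud))
			return NULL;
		return (pte_t *)pmd_offset(pud, addr);
	}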
Signed-off-by: Adam Litke <agl@us.ibm.com>
Cc: <linux-mm@kvack.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Adam Litke [Sat, 3 Sep 2005 22:55:00 +0000 (15:55 -0700)]
[PATCH] hugetlb: move stale pte check into huge_pte_alloc()
Initial Post (Wed, 17 Aug 2005)
This patch moves the
if (! pte_none(*pte))
hugetlb_clean_stale_pgtable(pte);
logic into huge_pte_alloc() so all of its callers can be immune to the bug
described by Kenneth Chen at http://lkml.org/lkml/2004/6/16/246
> It turns out there is a bug in hugetlb_prefault(): with 3 level page table,
> huge_pte_alloc() might return a pmd that points to a PTE page. It happens
> if the virtual address for hugetlb mmap is recycled from previously used
> normal page mmap. free_pgtables() might not scrub the pmd entry on
> munmap and hugetlb_prefault skips on any pmd presence regardless what type
> it is.
Unless I am missing something, it seems more correct to place the check inside huge_pte_alloc() to prevent the same bug wherever a huge pte is allocated.
It also allows checking for this condition when lazily faulting huge pages
later in the series.
Signed-off-by: Adam Litke <agl@us.ibm.com>
Cc: <linux-mm@kvack.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Adam Litke [Sat, 3 Sep 2005 22:54:59 +0000 (15:54 -0700)]
[PATCH] hugetlb: add pte_huge() macro
This patch adds a macro pte_huge(pte) for i386/x86_64 which is needed by a
patch later in the series. Instead of repeating (_PAGE_PRESENT |
_PAGE_PSE), I've added __LARGE_PTE to i386 to match x86_64.
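The macro's shape, sketched (the exact spelling differs slightly between i386 and x86_64):

	#define __LARGE_PTE	(_PAGE_PSE | _PAGE_PRESENT)
	#define pte_huge(pte)	((pte_val(pte) & __LARGE_PTE) == __LARGE_PTE)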
Signed-off-by: Adam Litke <agl@us.ibm.com>
Cc: <linux-mm@kvack.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] arm: allow for arch-specific IOREMAP_MAX_ORDER
Version 6 of the ARM architecture introduces the concept of 16MB pages
(supersections) and 36-bit (40-bit actually, but nobody uses this) physical
addresses. On ARMv6, 36-bit addressed memory and I/O can only be mapped using supersections, and the requirement on these is that both virtual and physical addresses be 16MB aligned. In trying to add support for ioremap()
of 36-bit I/O, we run into the issue that get_vm_area() allows for a
maximum of 512K alignment via the IOREMAP_MAX_ORDER constant. To work
around this, we can:
- Allocate a larger VM area than needed (size + (1ul << IOREMAP_MAX_ORDER))
and then align the pointer ourselves, but this ends up with 512K of
wasted VM per ioremap().
- Provide a new __get_vm_area_aligned() API and make __get_vm_area() sit on top of this. I did this and it works, but I don't like the idea of adding another VM API just for this one case.
- My preferred solution, which is to allow the architecture to override the IOREMAP_MAX_ORDER constant with its own version.
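The override mechanism, sketched (the default matches the 512K alignment described above; ARM's value gives the 16MB alignment supersections need):

	/* include/linux/vmalloc.h: overridable default */
	#ifndef IOREMAP_MAX_ORDER
	#define IOREMAP_MAX_ORDER	(7 + PAGE_SHIFT)	/* 512K with 4K pages */
	#endif

	/* ARM pre-empts it in its own headers: 1 << 24 == 16MB */
	#define IOREMAP_MAX_ORDER	24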
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
_PAGE_FILE does not indicate whether a file is in the page / swap cache; it is set just for non-linear PTEs. Correct the comment for i386, x86_64, UML. Also clarify _PAGE_NONE.
Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
If !vma->vm_ops we already BUG above, so retesting it is useless. The compiler cannot optimize this because BUG is a macro and is thus not marked noreturn; that should possibly be fixed.
Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] shmem_populate: avoid a useless check, and add some comments
Either shmem_getpage returns a failure, or it found a page, or it was told
it couldn't do any I/O. So it's useless to check nonblock in the else
branch. We could add a BUG() there but I preferred to comment the
offending function.
This was taken out from one of Ingo Molnar's old patches I'm resurrecting.
Better late than never, I've at last reviewed the madvise vma merging
going into 2.6.13. Remove a pointless check and fix two little bugs -
a simple test (with /proc/<pid>/maps hacked to show ReadHints) showed
both mismerges in practice: though being madvise, neither was disastrous.
1. Correct placement of the success label in madvise_behavior: as in mprotect_fixup and mlock_fixup, it is necessary to update vm_flags when vma_merge succeeds (to handle the exceptional Case 8 noted in the comments above vma_merge itself); see the sketch after this list.
2. Correct initial value of prev when starting part way into a vma: as
in sys_mprotect and do_mlock, it needs to be set to vma in this case
(vma_merge handles only that minimum of cases shown in its comments).
3. If find_vma_prev sets prev, then the vma it returns is prev->vm_next,
so it's pointless to make that same assignment again in sys_madvise.
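A sketch of fix 1, the success label placement in madvise_behavior (argument details simplified, not the exact hunk):

	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
	*prev = vma_merge(mm, *prev, start, end, new_flags, vma->anon_vma,
			  vma->vm_file, pgoff, vma_policy(vma));
	if (*prev) {
		vma = *prev;
		goto success;	/* merged: the new flags must still be applied */
	}
	/* otherwise split_vma() as needed, then fall through to... */
	success:
		vma->vm_flags = new_flags;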
Nick Piggin [Sat, 3 Sep 2005 22:54:49 +0000 (15:54 -0700)]
[PATCH] mm: remap ZERO_PAGE mappings
filemap_xip's nopage routine maps the ZERO_PAGE into readonly mappings, if it
has no data page to map there: then if the hole in the file is later filled,
__xip_unmap uses an rmap technique to replace the ZERO_PAGEs mapped for that
offset by the newly allocated file page, so that established mappings will see
the newly written data.
However, on MIPS (alone) there's not one but as many as eight ZERO_PAGEs,
chosen for coloring by user virtual address; and if mremap has meanwhile been
used to move a mapping containing a ZERO_PAGE, it will generally not match the
ZERO_PAGE(address) __xip_unmap is looking for.
To maintain XIP's established mappings correctly on MIPS, we need Nick's fix
to mremap's move_one_page (originally presented as an optimization), to
replace the ZERO_PAGE appropriate to the old address by the ZERO_PAGE
appropriate to the new address.
(But when I first saw this, I was thinking the ZERO_PAGEs themselves would get
corrupted, very bad. Now I think it's the other way round, that the
established mappings will fail to see the newly written data: incorrect, but
not corrupting everything else. Whether filemap_xip's technique is generally
safe, I'd hesitate to say in a hurry: it's interesting, but we've never tried
to do that in tmpfs.)
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Nick Piggin [Sat, 3 Sep 2005 22:54:47 +0000 (15:54 -0700)]
[PATCH] mm: micro-optimise rmap
Micro-optimise page_add_anon_rmap. Although these expressions are used only in the taken branch of the if() statement, the compiler can't move them inside it because atomic_inc_and_test is a barrier.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] /proc/<pid>/numa_maps to show on which nodes pages reside
This patch was recently discussed on linux-mm:
http://marc.theaimsgroup.com/?t=112085728500002&r=1&w=2
I inherited a large code base from Ray for page migration. There was a
small patch in there that I find to be very useful since it allows the
display of the locality of the pages in use by a process. I reworked that
patch and came up with a /proc/<pid>/numa_maps that gives more information
about the vma's of a process. numa_maps is indexed by the start address found in /proc/<pid>/maps. For example, with this patch you can see the page use
of the "getty" process:
getty uses ld.so. The first vma is the code segment which is used by 43
other processes and the pages are evenly distributed over the 4 nodes.
The second vma is the process specific data portion for ld.so. This is
only one page.
The display format is:

<startaddress>   links to the information in /proc/<pid>/maps
<memory policy>  "default", "interleave={}", "prefer=<node>" or "bind={<zones>}"
MaxRef=          maximum number of references to a page in this vma
Pages=           number of pages in use
Mapped=          number of pages with a mapcount
Anon=            number of anonymous pages
Nx=              number of pages on node x
The content of the proc file is self-evident. If this were tied into the sparsemem system, the contents of this file would not be too useful.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Remove the three get_mm_counter(mm, rss) tests from rmap.c: there was a time when testing rss was important to avoid a particular race between dup_mmap and the anonmm rmap; but now it's just a rather silly pseudo-optimization, made even more obscure by the get_mm_counter macro.
Three of the four BUG_ONs in delete_from_swap_cache are immediately
repeated in __delete_from_swap_cache: delete those and add the one. But
perhaps mm/ is altogether overprovisioned with historic BUGs?
Aha, swsusp dips into swap_info[], better update it to swap_lock. It's
bitflipping flags with 0xFF, so get_swap_page will allocate from only the one
chosen device: let's change that to flip SWP_WRITEOK.
The idea of a swap_device_lock per device, and a swap_list_lock over them all,
is appealing; but in practice almost every holder of swap_device_lock must
already hold swap_list_lock, which defeats the purpose of the split.
The only exceptions have been swap_duplicate, valid_swaphandles and an
untrodden path in try_to_unuse (plus a few places added in this series).
valid_swaphandles doesn't show up high in profiles, but swap_duplicate does
demand attention. However, with the hold time in get_swap_pages so much
reduced, I've not yet found a load and set of swap device priorities to show
even swap_duplicate benefitting from the split. Certainly the split is mere
overhead in the common case of a single swap device.
So, replace swap_list_lock and swap_device_lock by spinlock_t swap_lock
(generally we seem to prefer an _ in the name, and not hide in a macro).
If someone can show a regression in swap_duplicate, then probably we should add a hashlock for the swap_map entries alone (shorts not being atomic), so as to help the case of the single swap device too.
get_swap_page has often shown up on latency traces, doing lengthy scans while holding two spinlocks. swap_list_lock is already dropped; now let scan_swap_map drop swap_device_lock before scanning the swap_map.
While scanning for an empty cluster, don't worry that racing tasks may
allocate what was free and free what was allocated; but when allocating an
entry, check it's still free after retaking the lock. Avoid dropping the lock
in the expected common path. No barriers beyond the locks, just let the
cookie crumble; highest_bit limit is volatile, but benign.
Guard against swapoff: we must check SWP_WRITEOK before allocating, and must raise the SWP_SCANNING reference count while in scan_swap_map; swapoff waits for that to fall - just use schedule_timeout, since we don't want to burden scan_swap_map itself, and it's very unlikely that anyone can really still be in scan_swap_map once swapoff gets this far.
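The reference count is simply added into the high bits of si->flags, roughly (a sketch, not the exact code):

	/* inside scan_swap_map(), around the unlocked scan */
	si->flags += SWP_SCANNING;
	spin_unlock(&swap_lock);
	/* ... lengthy scan of si->swap_map without the lock ... */
	spin_lock(&swap_lock);
	si->flags -= SWP_SCANNING;

	/* swapoff side: wait for any scanner to drain */
	while (si->flags >= SWP_SCANNING) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		schedule_timeout(1);
	}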
Rewrite scan_swap_map to allocate in just the same way as before (taking the
next free entry SWAPFILE_CLUSTER-1 times, then restarting at the lowest wholly
empty cluster, falling back to lowest entry if none), but with a view towards
dropping the lock in the next patch.
Rewrite get_swap_page to allocate in just the same sequence as before, but without holding swap_list_lock across its scan_swap_map. Decrement nr_swap_pages and update swap_list.next in advance, while still holding swap_list_lock. Skip full devices by testing highest_bit. Swapoff holds swap_device_lock as well as swap_list_lock to clear SWP_WRITEOK. This reduces lock contention when there are parallel swap devices of the same priority.
This makes negligible difference in practice: but swap_list.next should not be
updated to a higher prio in the general helper swap_info_get, but rather in
swap_entry_free; and then only in the case when entry is actually freed.
The swap header's unsigned int last_page determines the range of swap pages,
but swap_info has been using int or unsigned long in some cases: use unsigned
int throughout (except, in several places a local unsigned long is useful to
avoid overflows when adding).
The "Adding %dk swap" message shows the number of swap extents, as a guide to
how fragmented the swapfile may be. But a useful further guide is what total
extent they span across (sometimes scarily large).
And there's no need to keep nr_extents in swap_info: it's unused after the
initial message, so save a little space by keeping it on stack.
There are several comments that swap's extent_list.prev points to the lowest
extent: that's not so, it's extent_list.next which points to it, as you'd
expect. And a couple of loops in add_swap_extent which go all the way through
the list, when they should just add to the other end.
Fix those up, and let map_swap_page search the list forwards: profiles shows
it to be twice as quick that way - because prefetch works better on how the
structs are typically kmalloc'ed? or because usually more is written to than
read from swap, and swap is allocated ascendingly?
sys_swapon's call to destroy_swap_extents on failure is made after the final
swap_list_unlock, which is faintly unsafe: another sys_swapon might already be
setting up that swap_info_struct. Calling it earlier, before taking
swap_list_lock, is safe. sys_swapoff's call to destroy_swap_extents was safe,
but likewise move it earlier, before taking the locks (once try_to_unuse has
completed, nothing can be needing the swap extents).
If a regular swapfile lies on a filesystem whose blocksize is less than
PAGE_SIZE, then setup_swap_extents may have to cut the number of usable swap
pages; but sys_swapon's nr_good_pages was not expecting that. Also,
setup_swap_extents takes no account of badpages listed in the swap header: not
worth doing so, but ensure nr_badpages is 0 for a regular swapfile.
Someone mentioned that almost all the architectures used basically the same
implementation of get_order. This patch consolidates them into
asm-generic/page.h and includes that in the appropriate places. The
exceptions are ia64 and ppc which have their own (presumably optimised)
versions.
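The consolidated generic version, reproduced from memory of asm-generic/page.h (treat it as a sketch): it computes the smallest order such that (1 << order) pages cover size.

	static __inline__ __attribute_const__ int get_order(unsigned long size)
	{
		int order;

		size = (size - 1) >> (PAGE_SHIFT - 1);
		order = -1;
		do {
			size >>= 1;
			order++;
		} while (size);
		return order;
	}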
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>