18 years ago: [IA64] New IA64 core/thread detection patch
Fenghua Yu [Tue, 28 Feb 2006 00:16:22 +0000 (16:16 -0800)]
[IA64] New IA64 core/thread detection patch

The IPF SDM 2.2 changes the definition of PAL_LOGICAL_TO_PHYSICAL, adding
proc_number=-1 to get core/thread mapping info for the running processor.

Based on this change, the existing core/thread detection in the IA64
kernel should be updated accordingly.  This patch implements that change.
It simplifies the detection code and eliminates a potential race
condition.  It also runs a bit faster and scales better as the number of
cores and threads per package grows.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] Increase max node count on SN platforms
Jack Steiner [Thu, 2 Mar 2006 22:02:32 +0000 (16:02 -0600)]
[IA64] Increase max node count on SN platforms

Update configuration files with new CONFIG option.

Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] Increase max node count on SN platforms
Jack Steiner [Thu, 2 Mar 2006 22:02:28 +0000 (16:02 -0600)]
[IA64] Increase max node count on SN platforms

Node numbers are kept in cpu_to_node_map, which is
currently defined as u8. Change it to u16 to accommodate
larger node numbers.
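
A minimal sketch of the type change (array name from the commit text; the
NR_CPUS bound is assumed for illustration):

    /* before: 8-bit entries limit the map to 256 node numbers */
    u8  cpu_to_node_map[NR_CPUS];

    /* after: 16-bit entries accommodate larger node counts */
    u16 cpu_to_node_map[NR_CPUS];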

Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] Increase max node count on SN platforms
Jack Steiner [Thu, 2 Mar 2006 22:02:25 +0000 (16:02 -0600)]
[IA64] Increase max node count on SN platforms

Add support in IA64 ACPI for platforms that support more than
256 nodes. Currently, ACPI is limited to 256 nodes because the
proximity domain number is 8 bits.

Long term, we expect to use ACPI 3.0 to support >256 nodes.
This patch is an interim solution that works with platforms
that pass the high-order bits of the proximity domain in
"reserved" fields of the ACPI tables. This code is enabled
ONLY on SN platforms.

Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] Increase max node count on SN platforms
Jack Steiner [Thu, 2 Mar 2006 22:02:21 +0000 (16:02 -0600)]
[IA64] Increase max node count on SN platforms

Add a configuration option to allow the maximum
number of nodes to be configurable for GENERIC or SN
kernels.

Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] Tollhouse HP: IA64 arch changes
Prarit Bhargava [Wed, 8 Mar 2006 18:30:18 +0000 (13:30 -0500)]
[IA64] Tollhouse HP: IA64 arch changes

Makes the arch/ia64/sn and include/asm-ia64/sn changes required to support
Tollhouse system PCI hotplug, fixes the ia64_sn_sysctl_ioboard_get call, and
introduces the PRF_HOTPLUG_SUPPORT feature bit.

Signed-off-by: Prarit Bhargava <prarit@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] cleanup dig_irq_init
Chen, Kenneth W [Sun, 12 Mar 2006 19:10:00 +0000 (11:10 -0800)]
[IA64] cleanup dig_irq_init

dig_irq_init is equivalent to machvec_noop; there is no need to define
another empty function.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] MCA recovery: kernel context recovery table
Russ Anderson [Fri, 24 Mar 2006 17:49:52 +0000 (09:49 -0800)]
[IA64] MCA recovery: kernel context recovery table

Memory errors encountered by user applications may surface
when the CPU is running in kernel context.  The current code
will not attempt recovery if the MCA surfaces in kernel
context (privilege mode 0).  This patch adds a check for cases
where the user initiated the load that surfaces in kernel
interrupt code.

An example is a user process launching a load from memory
where the data in memory has bad ECC.  Before the bad data
gets to the CPU register, an interrupt comes in.  The
code jumps to the IVT interrupt entry point and begins
execution in kernel context.  The process of saving the
user registers (SAVE_REST) causes the bad data to be loaded
into a CPU register, triggering the MCA.  The MCA surfaces in
kernel context, even though the load was initiated from
user context.

As suggested by David and Tony, this patch uses an exception
table like approach, putting the tagged recovery addresses in
a searchable table.  One difference from the exception table
is that MCAs do not surface in precise places (such as with
a TLB miss), so instead of tagging specific instructions,
address ranges are registered.  A single macro is used to do
the tagging, with the input parameter being the label
of the starting address and the location of the macro marking
the ending address.  This limits clutter in the code.

This patch only tags one spot, the interrupt ivt entry.
Testing showed that spot to be a "heavy hitter" with
MCAs surfacing while saving user registers.  Other spots
can be added as needed by adding a single macro.
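
A sketch of the table-of-ranges idea (names hypothetical; the real table is
built by the tagging macro and searched from the MCA handler):

    struct mca_range {
        unsigned long start;    /* label at the start of the range */
        unsigned long end;      /* address where the tagging macro sits */
    };

    /* exception-table style lookup: is ip inside a recoverable range? */
    static const struct mca_range *
    search_mca_table(const struct mca_range *first,
                     const struct mca_range *last, unsigned long ip)
    {
        for (; first <= last; first++)
            if (ip >= first->start && ip < first->end)
                return first;
        return 0;       /* not tagged: no recovery attempted */
    }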

Signed-off-by: Russ Anderson (rja@sgi.com)
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: IA64: Use early_param to handle mvec_name and nomca
Horms [Thu, 23 Mar 2006 22:27:12 +0000 (14:27 -0800)]
IA64: Use early_param to handle mvec_name and nomca

I'm not sure of the worthiness of this idea, so please consider it an RFC.
Its key merits are:

* Reuse existing infrastructure
* Greatly tightens up the parsing of nomca
* Greatly simplifies the parsing of machvec

Additional cleanup (moving setup_mvec() to machvec.c) by Ken Chen.

Signed-off-by: Horms <horms@verge.net.au>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] move patchlist and machvec into init section
Chen, Kenneth W [Sun, 12 Mar 2006 17:20:27 +0000 (09:20 -0800)]
[IA64] move patchlist and machvec into init section

ia64_mv is initialized based on the platform detected or specified.
However, there is only one instantiation of each platform type.  We
don't expect to switch platform vectors at run time.  Move those
platform-specific vectors into the init section, since a copy is
made into the global ia64_mv at initialization.

Also move the instruction patch list into the init section.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] add init declaration - nolwsys
Chen, Kenneth W [Sun, 12 Mar 2006 17:10:59 +0000 (09:10 -0800)]
[IA64] add init declaration - nolwsys

Add __initdata to nolwsys.
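
The annotation, sketched (the initial value is assumed for illustration):

    #include <linux/init.h>

    /* lives in .init.data and is freed once boot completes */
    static int nolwsys __initdata;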

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] add init declaration - gate page functions
Chen, Kenneth W [Sun, 12 Mar 2006 17:08:26 +0000 (09:08 -0800)]
[IA64] add init declaration - gate page functions

Add init declarations to a bunch of patch functions and the gate
page setup function.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] add init declaration to memory initialization functions
Chen, Kenneth W [Thu, 23 Mar 2006 00:54:15 +0000 (16:54 -0800)]
[IA64] add init declaration to memory initialization functions

Add init declaration to variables/functions used for memory
initialization.  I don't think they would clash with memory
hotplug.  If they do, please yell.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] add init declaration to cpu initialization functions
Chen, Kenneth W [Sun, 12 Mar 2006 17:00:13 +0000 (09:00 -0800)]
[IA64] add init declaration to cpu initialization functions

Add init declaration to cpu initialization functions.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] add __init declaration to mca functions
Chen, Kenneth W [Sun, 12 Mar 2006 16:52:20 +0000 (08:52 -0800)]
[IA64] add __init declaration to mca functions

Mark init-related variables and functions in the MCA code with the
appropriate __init* declarations.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] Ignore disabled Local SAPIC Affinity Structure in SRAT
Kenji Kaneshige [Wed, 15 Mar 2006 05:45:11 +0000 (14:45 +0900)]
[IA64] Ignore disabled Local SAPIC Affinity Structure in SRAT

According to the ACPI spec, the OSPM must ignore the contents of the
Processor Local APIC/SAPIC Affinity Structure in the System Resource
Affinity Table (SRAT) if its enable flag is cleared. However, ia64
Linux refers to all of the Processor Local APIC/SAPIC Affinity
Structures in the SRAT regardless of the enable flag, which is
against the ACPI spec. This patch fixes the bug.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] sn_check_intr: use ia64_get_irr()
Bjorn Helgaas [Tue, 21 Mar 2006 17:44:07 +0000 (10:44 -0700)]
[IA64] sn_check_intr: use ia64_get_irr()

Use the recently-added ia64_get_irr() rather than duplicating the code.

Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Acked-by: Jes Sorensen <jes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: [IA64] fix ia64 is_hugepage_only_range
Chen, Kenneth W [Wed, 22 Mar 2006 18:49:00 +0000 (10:49 -0800)]
[IA64] fix ia64 is_hugepage_only_range

Fix the is_hugepage_only_range() definition to mean "overlaps"
instead of "within the architecturally restricted hugetlb address
range".  Simplify the ia64-specific code that used to use
is_hugepage_only_range() to just check which region the
address is in.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
18 years ago: Merge git://git.kernel.org/pub/scm/linux/kernel/git/perex/alsa
Linus Torvalds [Wed, 22 Mar 2006 18:59:20 +0000 (10:59 -0800)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/perex/alsa

* git://git.kernel.org/pub/scm/linux/kernel/git/perex/alsa: (124 commits)
  [ALSA] version 1.0.11rc4
  [PATCH] Intruduce DMA_28BIT_MASK
  [ALSA] hda-codec - Add support for ASUS P4GPL-X
  [ALSA] hda-codec - Add support for HP nx9420 laptop
  [ALSA] Fix memory leaks in error path of control.c
  [ALSA] AMD Au1x00: AC'97 controller is memory mapped
  [ALSA] AMD Au1x00: fix DMA init/cleanup
  [ALSA] hda-codec - Fix generic auto-configurator
  [ALSA] hda-codec - Fix BIOS auto-configuration
  [ALSA] Fixes typos in Audiophile-USB.txt
  [ALSA] ice1712 - typo fixes for dxr_enable module option
  [ALSA] AMD Au1x00: make driver build after cleanup
  [ALSA] ice1712 - Fix wrong value types for enum items
  [ALSA] fix resource leak in usbmixer
  [ALSA] Fix gus_pcm dereference before NULL
  [ALSA] Fix seq_clientmgr dereferences before NULL check
  [ALSA] hda-codec - Fix for Samsung R65 and ASUS A6J
  [ALSA] hda-codec - Add support for VAIO FE550G and SZ110
  [ALSA] usb-audio: add Maya44 mixer control names
  [ALSA] usb-audio: add Casio PL-40R support
  ...

18 years ago: Merge git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial
Linus Torvalds [Wed, 22 Mar 2006 18:58:05 +0000 (10:58 -0800)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial

* git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial:
  fixed path to moved file in include/linux/device.h
  Fix spelling in E1000_DISABLE_PACKET_SPLIT Kconfig description
  Documentation/dvb/get_dvb_firmware: fix firmware URL
  Documentation: Update to BUG-HUNTING
  Remove superfluous NOTIFY_COOKIE_LEN define
  add "tags" to .gitignore
  Fix "frist", "fisrt", typos
  fix rwlock usage example
  It's UTF-8

18 years ago: Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6
Linus Torvalds [Wed, 22 Mar 2006 18:56:57 +0000 (10:56 -0800)]
Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6

* master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6:
  [SPARC64]: Add a secondary TSB for hugepage mappings.
  [SPARC]: Respect vm_page_prot in io_remap_page_range().

18 years ago: Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
Linus Torvalds [Wed, 22 Mar 2006 18:56:23 +0000 (10:56 -0800)]
Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6

* master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6:
  [TG3]: Bump driver version and reldate.
  [TG3]: Skip phy power down on some devices
  [TG3]: Fix SRAM access during tg3_init_one()
  [X25]: dte facilities 32 64 ioctl conversion
  [X25]: allow ITU-T DTE facilities for x25
  [X25]: fix kernel error message 64 bit kernel
  [X25]: ioctl conversion 32 bit user to 64 bit kernel
  [NET]: socket timestamp 32 bit handler for 64 bit kernel
  [NET]: allow 32 bit socket ioctl in 64 bit kernel
  [BLUETOOTH]: Return negative error constant

18 years ago: Merge master.kernel.org:/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6
Linus Torvalds [Wed, 22 Mar 2006 18:47:24 +0000 (10:47 -0800)]
Merge master.kernel.org:/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6

* master.kernel.org:/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6: (138 commits)
  [SCSI] libata: implement minimal transport template for ->eh_timed_out
  [SCSI] eliminate rphy allocation in favour of expander/end device allocation
  [SCSI] convert mptsas over to end_device/expander allocations
  [SCSI] allow displaying and setting of cache type via sysfs
  [SCSI] add scsi_mode_select to scsi_lib.c
  [SCSI] 3ware 9000 add big endian support
  [SCSI] qla2xxx: update MAINTAINERS
  [SCSI] scsi: move target_destroy call
  [SCSI] fusion - bump version
  [SCSI] fusion - expander hotplug suport in mptsas module
  [SCSI] fusion - exposing raid components in mptsas
  [SCSI] fusion - memory leak, and initializing fields
  [SCSI] fusion - exclosure misspelled
  [SCSI] fusion - cleanup mptsas event handling functions
  [SCSI] fusion - removing target_id/bus_id from the VirtDevice structure
  [SCSI] fusion - static fix's
  [SCSI] fusion - move some debug firmware event debug msgs to verbose level
  [SCSI] fusion - loginfo header update
  [SCSI] add scsi_reprobe_device
  [SCSI] megaraid_sas: fix extended timeout handling
  ...

18 years ago: [PATCH] SELinux: add slab cache for inode security struct
James Morris [Wed, 22 Mar 2006 08:09:22 +0000 (00:09 -0800)]
[PATCH] SELinux: add slab cache for inode security struct

Add a slab cache for the SELinux inode security struct, one of which is
allocated for every inode instantiated by the system.

The memory savings are considerable.

On 64-bit, instead of the size-128 cache, we have a slab object of 96
bytes, saving 32 bytes per object.  After booting, I see about 4000 of
these and then about 17,000 after a kernel compile.  With this patch, we
save around 530KB of kernel memory in the latter case.  On 32-bit, the
savings are about half of this.
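
A sketch of the pattern (2.6-era kmem_cache_create() with ctor/dtor
arguments; cache and struct names follow the commit's subject area):

    #include <linux/slab.h>

    static kmem_cache_t *sel_inode_cache;

    static void __init sel_inode_cache_init(void)
    {
        /* a dedicated 96-byte cache instead of the generic size-128 one */
        sel_inode_cache = kmem_cache_create("selinux_inode_security",
                                sizeof(struct inode_security_struct),
                                0, SLAB_PANIC, NULL, NULL);
    }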

Signed-off-by: James Morris <jmorris@namei.org>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] SELinux: cleanup stray variable in selinux_inode_init_security()
James Morris [Wed, 22 Mar 2006 08:09:21 +0000 (00:09 -0800)]
[PATCH] SELinux: cleanup stray variable in selinux_inode_init_security()

Remove an unneeded pointer variable in selinux_inode_init_security().

Signed-off-by: James Morris <jmorris@namei.org>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] SELinux: fix hard link count for selinuxfs root directory
James Morris [Wed, 22 Mar 2006 08:09:20 +0000 (00:09 -0800)]
[PATCH] SELinux: fix hard link count for selinuxfs root directory

A further fix is needed for selinuxfs link count management, to ensure that
the count is correct for the parent directory when a subdirectory is
created.  This is only required for the root directory currently, but the
code has been updated for the general case.

Signed-off-by: James Morris <jmorris@namei.org>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] selinuxfs cleanups: sel_make_avc_files
James Morris [Wed, 22 Mar 2006 08:09:19 +0000 (00:09 -0800)]
[PATCH] selinuxfs cleanups: sel_make_avc_files

Fix a copy & paste error in sel_make_avc_files(), removing a spurious call to
d_genocide() in the error path.  All of this will be cleaned up by
kill_litter_super().

Signed-off-by: James Morris <jmorris@namei.org>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] selinuxfs cleanups: sel_make_bools
James Morris [Wed, 22 Mar 2006 08:09:18 +0000 (00:09 -0800)]
[PATCH] selinuxfs cleanups: sel_make_bools

Remove the call to sel_make_bools() from sel_fill_super(), as policy needs to
be loaded before the boolean files can be created.  Policy will never be
loaded during sel_fill_super() as selinuxfs is kernel mounted during init and
the only means to load policy is via selinuxfs.

Also, the call to d_genocide() on the error path of sel_make_bools() is
incorrect and replaced with sel_remove_bools().

Signed-off-by: James Morris <jmorris@namei.org>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] selinuxfs cleanups: sel_fill_super exit path
James Morris [Wed, 22 Mar 2006 08:09:17 +0000 (00:09 -0800)]
[PATCH] selinuxfs cleanups: sel_fill_super exit path

Unify the error path of sel_fill_super() so that all errors pass through the
same point and generate an error message.  Also, removes a spurious dput() in
the error path which breaks the refcounting for the filesystem
(litter_kill_super() will correctly clean things up itself on error).

Signed-off-by: James Morris <jmorris@namei.org>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] selinuxfs cleanups: use sel_make_dir()
James Morris [Wed, 22 Mar 2006 08:09:17 +0000 (00:09 -0800)]
[PATCH] selinuxfs cleanups: use sel_make_dir()

Use existing sel_make_dir() helper to create booleans directory rather than
duplicating the logic.

Signed-off-by: James Morris <jmorris@namei.org>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] selinuxfs cleanups: fix hard link count
James Morris [Wed, 22 Mar 2006 08:09:16 +0000 (00:09 -0800)]
[PATCH] selinuxfs cleanups: fix hard link count

Fix the hard link count for selinuxfs directories, which are currently one
short.

Signed-off-by: James Morris <jmorris@namei.org>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] selinux: simplify sel_read_bool
Stephen Smalley [Wed, 22 Mar 2006 08:09:15 +0000 (00:09 -0800)]
[PATCH] selinux: simplify sel_read_bool

Simplify sel_read_bool to use the simple_read_from_buffer helper, like the
other selinuxfs functions.
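
The simplified shape, sketched (buffer contents illustrative;
simple_read_from_buffer() handles the offset and copy_to_user bookkeeping):

    static ssize_t sel_read_bool(struct file *filep, char __user *buf,
                                 size_t count, loff_t *ppos)
    {
        char page[32];          /* illustrative on-stack buffer */
        ssize_t length;

        /* "<current> <pending>" values assumed for the sketch */
        length = scnprintf(page, sizeof(page), "%d %d", 1, 0);
        return simple_read_from_buffer(buf, count, ppos, page, length);
    }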

Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
Acked-by: James Morris <jmorris@namei.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] sem2mutex: security/
Ingo Molnar [Wed, 22 Mar 2006 08:09:14 +0000 (00:09 -0800)]
[PATCH] sem2mutex: security/

Semaphore to mutex conversion.

The conversion was generated via scripts, and the result was validated
automatically via a script as well.
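
The conversion pattern, sketched (lock name and critical section
illustrative):

    #include <linux/mutex.h>

    /* before: static DECLARE_MUTEX(policy_sem); with down()/up() pairs */
    static DEFINE_MUTEX(policy_mutex);

    static void policy_op(void)
    {
        mutex_lock(&policy_mutex);
        /* ... critical section ... */
        mutex_unlock(&policy_mutex);
    }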

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Stephen Smalley <sds@epoch.ncsc.mil>
Cc: James Morris <jmorris@namei.org>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] selinux: Disable automatic labeling of new inodes when no policy is loaded
Stephen Smalley [Wed, 22 Mar 2006 08:09:13 +0000 (00:09 -0800)]
[PATCH] selinux: Disable automatic labeling of new inodes when no policy is loaded

This patch disables the automatic labeling of new inodes on disk
when no policy is loaded.

Discussion is here:
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=180296

In short, we're changing the behavior so that when no policy is loaded,
SELinux does not label files at all.  Currently it does add an 'unlabeled'
label in this case, which we've found causes problems later.

SELinux always maintains a safe internal label if there is none, so with this
patch, we just stick with that and wait until a policy is loaded before adding
a persistent label on disk.

The effect is simply that if you boot with SELinux enabled but no policy
loaded and create a file in that state, SELinux won't try to set a security
extended attribute on the new inode on the disk.  This is the only sane
behavior for SELinux in that state, as it cannot determine the right label to
assign in the absence of a policy.  That state usually doesn't occur, but the
rawhide installer seemed to be misbehaving temporarily so it happened to show
up on a test install.

Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
Acked-by: James Morris <jmorris@namei.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] page migration reorg
Christoph Lameter [Wed, 22 Mar 2006 08:09:12 +0000 (00:09 -0800)]
[PATCH] page migration reorg

Centralize the page migration functions in anticipation of additional
tinkering.  Creates a new file, mm/migrate.c.

1. Extract buffer_migrate_page() from fs/buffer.c

2. Extract central migration code from vmscan.c

3. Extract some components from mempolicy.c

4. Export pageout() and remove_from_swap() from vmscan.c

5. Make it possible to configure NUMA systems without page migration
   and non-NUMA systems with page migration.

I had to do some #ifdeffing in mempolicy.c that may need a cleanup.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] mm: slab cache interleave rotor fix
Paul Jackson [Wed, 22 Mar 2006 08:09:11 +0000 (00:09 -0800)]
[PATCH] mm: slab cache interleave rotor fix

The alien cache rotor in mm/slab.c assumes that the first online node is
node 0.  Eventually for some archs, especially with hotplug, this will no
longer be true.

Fix the interleave rotor to handle the general case of node numbering.
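
A sketch of the general-case rotor using the nodemask helpers (wrapper name
illustrative):

    #include <linux/nodemask.h>

    /* advance to the next online node, wrapping past the highest one */
    static int next_rotor_node(int node)
    {
        node = next_node(node, node_online_map);
        if (node == MAX_NUMNODES)
            node = first_node(node_online_map);
        return node;
    }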

Signed-off-by: Paul Jackson <pj@sgi.com>
Acked-by: Christoph Lameter <clameter@engr.sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] mm: hugetlb alloc_fresh_huge_page bogus node loop fix
Paul Jackson [Wed, 22 Mar 2006 08:09:10 +0000 (00:09 -0800)]
[PATCH] mm: hugetlb alloc_fresh_huge_page bogus node loop fix

Fix bogus node loop in hugetlb.c alloc_fresh_huge_page(), which was
assuming that nodes are numbered contiguously from 0 to num_online_nodes().
Once the hotplug folks get this far, that will be false.

Signed-off-by: Paul Jackson <pj@sgi.com>
Acked-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] fix swap cluster offset
Akinobu Mita [Wed, 22 Mar 2006 08:09:09 +0000 (00:09 -0800)]
[PATCH] fix swap cluster offset

When we've allocated SWAPFILE_CLUSTER pages, ->cluster_next should be the
first index of the swap cluster.  But the current code probably sets it to
the wrong offset.

Signed-off-by: Akinobu Mita <mita@miraclelinux.com>
Acked-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] drain_node_pages: interrupt latency reduction / optimization
Christoph Lameter [Wed, 22 Mar 2006 08:09:08 +0000 (00:09 -0800)]
[PATCH] drain_node_pages: interrupt latency reduction / optimization

1. Only disable interrupts if there is actually something to free

2. Only dirty the pcp cacheline if we actually freed something.

3. Disable interrupts for each single pcp and not for cleaning
  all the pcps in all zones of a node.

drain_node_pages is called every 2 seconds from cache_reap. This
fix should avoid most disabling of interrupts.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] slab: fix drain_array() so that it works correctly with the shared_array
Christoph Lameter [Wed, 22 Mar 2006 08:09:07 +0000 (00:09 -0800)]
[PATCH] slab: fix drain_array() so that it works correctly with the shared_array

The list_lock also protects the shared array and we call drain_array() with
the shared array.  Therefore we cannot go as far as I wanted to, but have to
take the lock in a way that also protects the array_cache in
drain_pages.

(Note: maybe we should make the array_cache locking more consistent?  I.e.
always take the array cache lock for shared arrays and disable interrupts
for the per cpu arrays?)

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] slab: remove drain_array_locked
Christoph Lameter [Wed, 22 Mar 2006 08:09:07 +0000 (00:09 -0800)]
[PATCH] slab: remove drain_array_locked

Remove drain_array_locked and use the opportunity to further limit the time
the l3 lock is held.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] slab: make drain_array more universal by adding more parameters
Christoph Lameter [Wed, 22 Mar 2006 08:09:06 +0000 (00:09 -0800)]
[PATCH] slab: make drain_array more universal by adding more parameters

Add a parameter to drain_array to control the freeing of all objects, and
then use drain_array() to replace instances of drain_array_locked with
drain_array.  Doing so will avoid taking locks in those locations if the
arrays are empty.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] slab: cache_reap(): further reduction in interrupt holdoff
Christoph Lameter [Wed, 22 Mar 2006 08:09:05 +0000 (00:09 -0800)]
[PATCH] slab: cache_reap(): further reduction in interrupt holdoff

cache_reap takes the l3->list_lock (disabling interrupts) unconditionally
and then does a few checks and maybe does some cleanup.  This patch makes
cache_reap() only take the lock if there is work to do and then the lock is
taken and released for each cleaning action.

The checking of when to do the next reaping is done without any locking and
becomes racy.  Should not matter since reaping can also be skipped if the
slab mutex cannot be acquired.

The same is true for the touched processing.  If we get this wrong once in
a while then we will mistakenly clean or not clean the shared cache.  This
will impact performance slightly.

Note that the additional drain_array() function introduced here will fall
out in a subsequent patch since array cleaning will now be very similar
from all callers.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] mm: make shrink_all_memory try harder
Rafael J. Wysocki [Wed, 22 Mar 2006 08:09:04 +0000 (00:09 -0800)]
[PATCH] mm: make shrink_all_memory try harder

Make shrink_all_memory() repeat the attempts to free more memory if there
seem to be no pages to free.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] optimize follow_hugetlb_page
Chen, Kenneth W [Wed, 22 Mar 2006 08:09:03 +0000 (00:09 -0800)]
[PATCH] optimize follow_hugetlb_page

follow_hugetlb_page() walks a range of user virtual addresses and then fills
a list of struct page * into an array that is passed in via the argument
list.  It also gets a reference count via get_page().  For a compound page,
get_page() actually traverses back to the head page via the page_private()
macro and then adds a reference count to the head page.  Since we are doing
a virt-to-pte lookup, the kernel already has a struct page pointer to the
head page.  So instead of traversing into the constituent page struct and
then following a link back to the head page, optimize by incrementing the
reference count directly on the head page.

The benefit is that we don't take a cache miss on accessing the page struct
for the corresponding user address and, more importantly, we don't pollute
the cache with a "not very useful" round trip of pointer chasing.  This
adds a moderate performance gain on an I/O-intensive database transaction
workload.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] convert hugetlbfs_counter to atomic
Chen, Kenneth W [Wed, 22 Mar 2006 08:09:02 +0000 (00:09 -0800)]
[PATCH] convert hugetlbfs_counter to atomic

Implementation of hugetlbfs_counter() is functionally equivalent to
atomic_inc_return().  Use the simpler atomic form.
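
The equivalence, sketched (counter and helper names assumed):

    #include <asm/atomic.h>

    static atomic_t hugetlbfs_counter = ATOMIC_INIT(0);

    /* replaces a lock-protected "++counter; return counter;" sequence */
    static unsigned long hugetlbfs_next_ino(void)
    {
        return atomic_inc_return(&hugetlbfs_counter);
    }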

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] hugepage: is_aligned_hugepage_range() cleanup
David Gibson [Wed, 22 Mar 2006 08:09:01 +0000 (00:09 -0800)]
[PATCH] hugepage: is_aligned_hugepage_range() cleanup

Quite a long time back, prepare_hugepage_range() replaced
is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code to
verify if an address range is suitable for a hugepage mapping.
is_aligned_hugepage_range() stuck around, but only to implement
prepare_hugepage_range() on archs which didn't implement their own.

Most archs (everything except ia64 and powerpc) used the same
implementation of is_aligned_hugepage_range().  On powerpc, which
implements its own prepare_hugepage_range(), the custom version was never
used.

In addition, "is_aligned_hugepage_range()" was a bad name, because it
suggests it returns true iff the given range is a good hugepage range,
whereas in fact it returns 0-or-error (so the sense is reversed).

This patch cleans up by abolishing is_aligned_hugepage_range().  Instead
prepare_hugepage_range() is defined directly.  Most archs use the default
version, which simply checks the given region is aligned to the size of a
hugepage.  ia64 and powerpc define custom versions.  The ia64 one simply
checks that the range is in the correct address space region in addition to
being suitably aligned.  The powerpc version (just as previously) checks
for suitable addresses, and if necessary performs low-level MMU frobbing to
set up new areas for use by hugepages.
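
The default version amounts to an alignment check, roughly (a sketch;
HPAGE_MASK is the hugepage alignment mask):

    /* usable iff both start and length are hugepage aligned */
    static int prepare_hugepage_range(unsigned long addr, unsigned long len)
    {
        if (len & ~HPAGE_MASK)
            return -EINVAL;
        if (addr & ~HPAGE_MASK)
            return -EINVAL;
        return 0;
    }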

No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] hugepage: Move hugetlb_free_pgd_range() prototype to hugetlb.h
David Gibson [Wed, 22 Mar 2006 08:08:59 +0000 (00:08 -0800)]
[PATCH] hugepage: Move hugetlb_free_pgd_range() prototype to hugetlb.h

The optional hugepage callback, hugetlb_free_pgd_range(), is presently
implemented non-trivially only on ia64 (but I plan to add one for powerpc
shortly).  It has its own prototype for the function in asm-ia64/pgtable.h.
However, since the function is called from generic code, it makes sense for
its prototype to be in the generic hugetlb.h header file, as the prototypes
of the other arch callbacks already are (prepare_hugepage_range(),
set_huge_pte_at(), etc.).  This patch makes it so.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] hugepage: Fix hugepage logic in free_pgtables() harder
David Gibson [Wed, 22 Mar 2006 08:08:58 +0000 (00:08 -0800)]
[PATCH] hugepage: Fix hugepage logic in free_pgtables() harder

Turns out the hugepage logic in free_pgtables() was doubly broken.  The
loop coalescing multiple normal page VMAs into one call to free_pgd_range()
had an off-by-one error, which could mean it would coalesce one hugepage
VMA into the same bundle (checking 'vma' not 'next' in the loop).  I
transferred this bug into the new is_vm_hugetlb_page() based version.
Here's the fix.

This one didn't bite on powerpc previously for the same reason the
is_hugepage_only_range() problem didn't: powerpc's hugetlb_free_pgd_range()
is identical to free_pgd_range().  It didn't bite on ia64 because the
hugepage region is distant enough from any other region that the separated
PMD_SIZE distance test would always prevent coalescing the two together.

No libhugetlbfs testsuite regressions (ppc64, POWER5).

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] hugepage: Fix hugepage logic in free_pgtables()
David Gibson [Wed, 22 Mar 2006 08:08:57 +0000 (00:08 -0800)]
[PATCH] hugepage: Fix hugepage logic in free_pgtables()

free_pgtables() has special logic to call hugetlb_free_pgd_range() instead
of the normal free_pgd_range() on hugepage VMAs.  However, the test it uses
to do so is incorrect: it calls is_hugepage_only_range on a hugepage sized
range at the start of the vma.  is_hugepage_only_range() will return true
if the given range has any intersection with a hugepage address region, and
in this case the given region need not be hugepage aligned.  So, for
example, this test can return true if called on, say, a 4k VMA immediately
preceding a (nicely aligned) hugepage VMA.

At present we get away with this because the powerpc version of
hugetlb_free_pgd_range() is just a call to free_pgd_range().  On ia64 (the
only other arch with a non-trivial is_hugepage_only_range()) we get away
with it for a different reason; the hugepage area is not contiguous with
the rest of the user address space, and VMAs are not permitted in between,
so the test can't return a false positive there.

Nonetheless this should be fixed.  We do that in the patch below by
replacing the is_hugepage_only_range() test with an explicit test of the
VMA using is_vm_hugetlb_page().

This in turn changes behaviour for platforms where is_hugepage_only_range()
returns false always (everything except powerpc and ia64).  We address this
by ensuring that hugetlb_free_pgd_range() is defined to be identical to
free_pgd_range() (instead of a no-op) on everything except ia64.  Even so,
it will prevent some otherwise possible coalescing of calls down to
free_pgd_range().  Since this only happens for hugepage VMAs, removing this
small optimization seems unlikely to cause any trouble.

This patch causes no regressions on the libhugetlbfs testsuite - ppc64
POWER5 (8-way), ppc64 G5 (2-way) and i386 Pentium M (UP).

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Acked-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] hugepage: Make {alloc,free}_huge_page() local
David Gibson [Wed, 22 Mar 2006 08:08:56 +0000 (00:08 -0800)]
[PATCH] hugepage: Make {alloc,free}_huge_page() local

Originally, mm/hugetlb.c just handled the hugepage physical allocation path
and its {alloc,free}_huge_page() functions were used from the arch-specific
hugepage code.  These days those functions are only used within mm/hugetlb.c
itself.  Therefore, this patch makes them static and removes their
prototypes from hugetlb.h.  This requires a small rearrangement of code in
mm/hugetlb.c to avoid a forward declaration.

This patch causes no regressions on the libhugetlbfs testsuite (ppc64,
POWER5).

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] hugepage: Strict page reservation for hugepage inodes
David Gibson [Wed, 22 Mar 2006 08:08:55 +0000 (00:08 -0800)]
[PATCH] hugepage: Strict page reservation for hugepage inodes

These days, hugepages are demand-allocated at first fault time.  There's a
somewhat dubious (and racy) heuristic when making a new mmap() to check if
there are enough available hugepages to fully satisfy that mapping.

A particularly obvious case where the heuristic breaks down is where a
process maps its hugepages not as a single chunk, but as a bunch of
individually mmap()ed (or shmat()ed) blocks without touching and
instantiating the pages in between allocations.  In this case the size of
each block is compared against the total number of available hugepages.
It's thus easy for the process to become overcommitted, because each block
mapping will succeed, although the total number of hugepages required by
all blocks exceeds the number available.  In particular, this defeats such
a program which will detect a mapping failure and adjust its hugepage usage
downward accordingly.

The patch below addresses this problem, by strictly reserving a number of
physical hugepages for hugepage inodes which have been mapped, but not
instantiated.  MAP_SHARED mappings are thus "safe" - they will fail on
mmap(), not later with an OOM SIGKILL.  MAP_PRIVATE mappings can still
trigger an OOM.  (Actually SHARED mappings can technically still OOM, but
only if the sysadmin explicitly reduces the hugepage pool between mapping
and instantiation)

This patch appears to address the problem at hand - it allows DB2 to start
correctly, for instance, which previously suffered the failure described
above.

This patch causes no regressions on the libhugetlbfs testsuite, and makes a
test (designed to catch this problem) pass which previously failed (ppc64,
POWER5).

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] hugepage: serialize hugepage allocation and instantiation
David Gibson [Wed, 22 Mar 2006 08:08:53 +0000 (00:08 -0800)]
[PATCH] hugepage: serialize hugepage allocation and instantiation

Currently, no lock or mutex is held between allocating a hugepage and
inserting it into the pagetables / page cache.  When we do go to insert the
page into pagetables or page cache, we recheck and may free the newly
allocated hugepage.  However, since the number of hugepages in the system
is strictly limited, and it's usual to want to use all of them, this can
still lead to spurious allocation failures.

For example, suppose two processes are both mapping (MAP_SHARED) the same
hugepage file, large enough to consume the entire available hugepage pool.
If they race instantiating the last page in the mapping, they will both
attempt to allocate the last available hugepage.  One will fail, of course,
returning OOM from the fault and thus causing the process to be killed,
despite the fact that the entire mapping can, in fact, be instantiated.

The patch fixes this race by the simple method of adding a (sleeping) mutex
to serialize the hugepage fault path between allocation and insertion into
pagetables and/or page cache.  It would be possible to avoid the
serialization by catching the allocation failures, waiting on some
condition, then rechecking to see if someone else has instantiated the page
for us.  Given the likely frequency of hugepage instantiations, it seems
very doubtful it's worth the extra complexity.
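
The serialization, sketched (mutex name and surrounding fault logic
assumed):

    #include <linux/mutex.h>

    /* held across allocate-then-insert so two faulters cannot both
     * fail after racing for the last free hugepage */
    static DEFINE_MUTEX(hugetlb_instantiation_mutex);

    static int hugetlb_fault_sketch(void)
    {
        int ret = 0;

        mutex_lock(&hugetlb_instantiation_mutex);
        /* ... recheck page cache, allocate hugepage, insert ... */
        mutex_unlock(&hugetlb_instantiation_mutex);
        return ret;
    }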

This patch causes no regression on the libhugetlbfs testsuite, and one
test, which can trigger this race now passes where it previously failed.

Actually, the test still sometimes fails, though less often and only as a
shmat() failure, rather than processes getting OOM killed by the VM.  The
dodgy heuristic tests in fs/hugetlbfs/inode.c for whether there's enough
hugepage space aren't protected by the new mutex, and would be ugly to do
so, so there's still a race there.  Another patch to replace those tests
with something saner, for this reason as well as others, is coming...

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] hugepage: Small fixes to hugepage clear/copy path
David Gibson [Wed, 22 Mar 2006 08:08:51 +0000 (00:08 -0800)]
[PATCH] hugepage: Small fixes to hugepage clear/copy path

Move the loops used in mm/hugetlb.c to clear and copy hugepages to their
own functions for clarity.  As we do so, we add some checks of need_resched()
- we are, after all, copying megabytes of memory here.  We also add
might_sleep() accordingly.  We generally dropped locks around the clear and
copy already, but not everyone has PREEMPT enabled, so we should still be
checking explicitly.

For this to work, we need to remove the clear_huge_page() from
alloc_huge_page(), which is called with the page_table_lock held in the COW
path.  We move the clear_huge_page() to just after the alloc_huge_page() in
the hugepage no-page path.  In the COW path, the new page is about to be
copied over, so clearing it was just a waste of time anyway.  So as a side
effect we also fix the fact that we held the page_table_lock for far too
long in this path by calling alloc_huge_page() under it.

It causes no regressions on the libhugetlbfs testsuite (ppc64, POWER5).

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] Enable mprotect on huge pages
Zhang, Yanmin [Wed, 22 Mar 2006 08:08:50 +0000 (00:08 -0800)]
[PATCH] Enable mprotect on huge pages

2.6.16-rc3 uses hugetlb on-demand paging, but it doesn't support hugetlb
mprotect.

From: David Gibson <david@gibson.dropbear.id.au>

  Remove a test from the mprotect() path which checks that the mprotect()ed
  range on a hugepage VMA is hugepage aligned (yes, really, the sense of
  is_aligned_hugepage_range() is the opposite of what you'd guess :-/).

  In fact, we don't need this test.  If the given addresses match the
  beginning/end of a hugepage VMA they must already be suitably aligned.  If
  they don't, then mprotect_fixup() will attempt to split the VMA.  The very
  first test in split_vma() will check for a badly aligned address on a
  hugepage VMA and return -EINVAL if necessary.

From: "Chen, Kenneth W" <kenneth.w.chen@intel.com>

  On i386 and x86-64, the pte flag _PAGE_PSE collides with _PAGE_PROTNONE.  The
  identity of a hugetlb pte is lost when changing page protection via mprotect.
  A page fault that occurs later will trigger a bug check in huge_pte_alloc().

  The fix is to always make the new pte a hugetlb pte and also to clean up
  legacy code where _PAGE_PRESENT was forced on in the pre-faulting days.

Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] readahead: fix initial window size calculation
Steven Pratt [Wed, 22 Mar 2006 08:08:48 +0000 (00:08 -0800)]
[PATCH] readahead: fix initial window size calculation

The current get_init_ra_size is not optimal across different IO
sizes and max_readahead values.  Here is a quick summary of sizes computed
under the current design and under the attached patch.  All of these assume 1st
IO at offset 0, or 1st detected sequential IO.

32k max, 4k request

old         new
-----------------
 8k        8k
16k       16k
32k       32k

128k max, 4k request

old         new
-----------------
32k         16k
64k         32k
128k        64k
128k       128k

128k max, 32k request

old         new
-----------------
32k         64k    <-----
64k        128k
128k       128k

512k max, 4k request

old         new
-----------------
4k         32k     <----
16k        64k
64k       128k
128k      256k
512k      512k

Cc: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Steven Pratt <slpratt@austin.ibm.com>
Cc: Ram Pai <linuxram@us.ibm.com>
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] readahead: ->prev_page can overrun the ahead window
Oleg Nesterov [Wed, 22 Mar 2006 08:08:47 +0000 (00:08 -0800)]
[PATCH] readahead: ->prev_page can overrun the ahead window

If get_next_ra_size() does not grow fast enough, ->prev_page can overrun
the ahead window.  This means the caller will read the pages from
->ahead_start + ->ahead_size to ->prev_page synchronously.

Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Steven Pratt <slpratt@austin.ibm.com>
Cc: Ram Pai <linuxram@us.ibm.com>
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] shmem: inline to avoid warning
Hugh Dickins [Wed, 22 Mar 2006 08:08:46 +0000 (00:08 -0800)]
[PATCH] shmem: inline to avoid warning

shmem.c was named and shamed in Jesper's "Building 100 kernels" warnings:
shmem_parse_mpol is only used when CONFIG_TMPFS parses mount options; and
only called from that one site, so mark it inline like its non-NUMA stub.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] vmscan: remove obsolete checks from shrink_list() and fix unlikely in refill_inactive_zone()
Christoph Lameter [Wed, 22 Mar 2006 08:08:45 +0000 (00:08 -0800)]
[PATCH] vmscan: remove obsolete checks from shrink_list() and fix unlikely in refill_inactive_zone()

As suggested by Marcelo:

1. The optimization introduced recently for not calling
   page_referenced() during zone reclaim makes two additional checks in
   shrink_list unnecessary.

2. The if (unlikely(sc->may_swap)) in refill_inactive_zone is optimized
   for the zone_reclaim case.  However, most people's systems only do swap.
   Undo that.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Marcelo Tosatti <marcelo.tosatti@cyclades.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] Uninline sys_mmap common code (reduce binary size)
Michael Buesch [Wed, 22 Mar 2006 08:08:44 +0000 (00:08 -0800)]
[PATCH] Uninline sys_mmap common code (reduce binary size)

Remove the inlining of the new vs old mmap system call common code.  This
reduces the size of the resulting vmlinux for defconfig as follows:

mb@pc1:~/develop/git/linux-2.6$ size vmlinux.mmap*
   text    data     bss     dec     hex filename
3303749  521524  186564 4011837  3d373d vmlinux.mmapinline
3303557  521524  186564 4011645  3d367d vmlinux.mmapnoinline

The new sys_mmap2() also has one function call of overhead removed now.
(Probably it was already optimized to a jmp before, but anyway...)

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] mm: optimise page_count
Nick Piggin [Wed, 22 Mar 2006 08:08:43 +0000 (00:08 -0800)]
[PATCH] mm: optimise page_count

Optimise page_count compound page test and make it consistent with similar
functions.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] mm: more CONFIG_DEBUG_VM
Nick Piggin [Wed, 22 Mar 2006 08:08:42 +0000 (00:08 -0800)]
[PATCH] mm: more CONFIG_DEBUG_VM

Put a few more checks under CONFIG_DEBUG_VM

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] mm: prep_zero_page() in irq is a bug
Andrew Morton [Wed, 22 Mar 2006 08:08:42 +0000 (00:08 -0800)]
[PATCH] mm: prep_zero_page() in irq is a bug

prep_zero_page() uses KM_USER0 and hence may not be used from IRQ context, at
least for highmem pages.

Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Christoph Lameter <christoph@lameter.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] mm: cleanup prep_ stuff
Nick Piggin [Wed, 22 Mar 2006 08:08:41 +0000 (00:08 -0800)]
[PATCH] mm: cleanup prep_ stuff

Move the prep_ stuff into prep_new_page.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] remove set_page_count() outside mm/
Nick Piggin [Wed, 22 Mar 2006 08:08:40 +0000 (00:08 -0800)]
[PATCH] remove set_page_count() outside mm/

set_page_count usage outside mm/ is limited to setting the refcount to 1.
Remove set_page_count from outside mm/, and replace those users with
init_page_count() and set_page_refcounted().

This allows more debug checking, and tighter control on how code is allowed
to play around with page->_count.
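
The replacement pattern, sketched (helper name illustrative;
set_page_refcounted() is the mm/-internal variant):

    #include <linux/mm.h>

    static void hand_out_page(struct page *page)
    {
        /* before: set_page_count(page, 1); */
        init_page_count(page);  /* refcount to 1, with room for debug checks */
    }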

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] remove set_page_count(page, 0) users (outside mm)
Nick Piggin [Wed, 22 Mar 2006 08:08:35 +0000 (00:08 -0800)]
[PATCH] remove set_page_count(page, 0) users (outside mm)

A couple of places call set_page_count(page, 1) when they don't need to.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] mm: nommu use compound pages
Nick Piggin [Wed, 22 Mar 2006 08:08:34 +0000 (00:08 -0800)]
[PATCH] mm: nommu use compound pages

Now that compound page handling is properly fixed in the VM, move nommu
over to using compound pages rather than rolling their own refcounting.

nommu vm page refcounting is broken anyway, but there is no need to have
divergent code in the core VM now, nor when it gets fixed.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: David Howells <dhowells@redhat.com>
(Needs testing, please).
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] mm: make __put_page internal
Nick Piggin [Wed, 22 Mar 2006 08:08:33 +0000 (00:08 -0800)]
[PATCH] mm: make __put_page internal

Remove __put_page from outside the core mm/.  It is dangerous because it does
not handle compound pages nicely, and misses 1->0 transitions.  If a user
later appears that really needs the extra speed we can reevaluate.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] x86_64: pageattr remove __put_page
Nick Piggin [Wed, 22 Mar 2006 08:08:33 +0000 (00:08 -0800)]
[PATCH] x86_64: pageattr remove __put_page

Remove page_count and __put_page from x86-64 pageattr

Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] x86_64: pageattr use single list
Nick Piggin [Wed, 22 Mar 2006 08:08:32 +0000 (00:08 -0800)]
[PATCH] x86_64: pageattr use single list

Use page->lru.next to implement the singly linked list of pages rather than
the struct deferred_page which needs to be allocated and freed for each
page.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] i386: pageattr remove __put_page
Nick Piggin [Wed, 22 Mar 2006 08:08:31 +0000 (00:08 -0800)]
[PATCH] i386: pageattr remove __put_page

Stop using __put_page and page_count in i386 pageattr.c

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] sg: use compound pages
Nick Piggin [Wed, 22 Mar 2006 08:08:30 +0000 (00:08 -0800)]
[PATCH] sg: use compound pages

sg increments the refcount of constituent pages in its higher order memory
allocations when they are about to be mapped by userspace.  This is done so
the subsequent get_page/put_page when doing the mapping and unmapping does not
free the page.

Move over to the preferred way, that is, using compound pages instead.  This
fixes a whole class of possible obscure bugs where a get_user_pages on a
constituent page may outlast the user mappings or even the driver.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Douglas Gilbert <dougg@torque.net>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] remove VM_DONTCOPY bogosities
Hugh Dickins [Wed, 22 Mar 2006 08:08:29 +0000 (00:08 -0800)]
[PATCH] remove VM_DONTCOPY bogosities

Now that it's madvisable, remove two pieces of VM_DONTCOPY bogosity:

1. There was and is no logical reason why VM_DONTCOPY should be in the
   list of flags which forbid vma merging (and those drivers which set
   it are also setting VM_IO, which itself forbids the merge).

2. It's hard to understand the purpose of the VM_HUGETLB, VM_DONTCOPY
   block in vm_stat_account: but never mind, it's under CONFIG_HUGETLB,
   which (unlike CONFIG_HUGETLB_PAGE or CONFIG_HUGETLBFS) has never been
   defined.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] mm: shrink_inactive_list() nr_scan accounting fix
Wu Fengguang [Wed, 22 Mar 2006 08:08:28 +0000 (00:08 -0800)]
[PATCH] mm: shrink_inactive_list() nr_scan accounting fix

In shrink_inactive_list(), nr_scan is not accounted when nr_taken is 0.
But 0 pages taken does not mean 0 pages scanned.

Move the goto statement below the accounting code to fix it.

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] mm: isolate_lru_pages() scan count fix
Wu Fengguang [Wed, 22 Mar 2006 08:08:23 +0000 (00:08 -0800)]
[PATCH] mm: isolate_lru_pages() scan count fix

In isolate_lru_pages(), *scanned reports one more scan because the scan
counter is increased one more time on exit of the while-loop.

Change the while-loop to a for-loop to fix it.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] zone_reclaim: additional comments and cleanup
Christoph Lameter [Wed, 22 Mar 2006 08:08:22 +0000 (00:08 -0800)]
[PATCH] zone_reclaim: additional comments and cleanup

Add some comments to explain how zone reclaim works.  It also fixes the
following issues:

- PF_SWAPWRITE needs to be set for RECLAIM_SWAP to be able to write
  out pages to swap. Currently RECLAIM_SWAP may not do that.

- Remove setting nr_reclaimed pages after slab reclaim, since the slab
  shrinking code does not use it and nr_reclaimed is already right for the
  intended follow-up action.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago: [PATCH] vmscan: rename functions
Andrew Morton [Wed, 22 Mar 2006 08:08:21 +0000 (00:08 -0800)]
[PATCH] vmscan: rename functions

We have:

try_to_free_pages
->shrink_caches(struct zone **zones, ..)
  ->shrink_zone(struct zone *, ...)
    ->shrink_cache(struct zone *, ...)
      ->shrink_list(struct list_head *, ...)
    ->refill_inactive_list(struct zone *, ...)

which is fairly irrational.

Rename things so that we have

  try_to_free_pages
  ->shrink_zones(struct zone **zones, ..)
    ->shrink_zone(struct zone *, ...)
      ->shrink_inactive_list(struct zone *, ...)
        ->shrink_page_list(struct list_head *, ...)
    ->shrink_active_list(struct zone *, ...)

Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Christoph Lameter <christoph@lameter.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] vmscan return nr_reclaimed
Andrew Morton [Wed, 22 Mar 2006 08:08:20 +0000 (00:08 -0800)]
[PATCH] vmscan return nr_reclaimed

Change all the vmscan functions to return the number of reclaimed pages and
remove scan_control.nr_reclaimed.

Saves ten-odd bytes of text and makes things clearer and more consistent.

The patch also changes the behaviour of zone_reclaim() when it falls back to slab shrinking.  Christoph says:

  "Setting this to one means that we will rescan and shrink the slab for
  each allocation if we are out of zone memory and RECLAIM_SLAB is set.  Plus
  if we do an order 0 allocation we do not go off node as intended.

  "We better set this to zero.  This means the allocation will go offnode
  despite us having potentially freed lots of memory on the zone.  Future
  allocations can then again be done from this zone."
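
In signature terms the change looks roughly like this (a sketch; the real
parameter lists in vmscan.c are longer):

  /* before: the result is smuggled through the control structure */
  static void shrink_zone(struct zone *zone, struct scan_control *sc);
  /* caller then reads sc->nr_reclaimed */

  /* after: each level simply returns what it reclaimed */
  static unsigned long shrink_zone(struct zone *zone, struct scan_control *sc);
  /* nr_reclaimed = shrink_zone(zone, sc); */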

Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Christoph Lameter <christoph@lameter.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] vmscan: use unsigned longs
Andrew Morton [Wed, 22 Mar 2006 08:08:19 +0000 (00:08 -0800)]
[PATCH] vmscan: use unsigned longs

Turn basically everything in vmscan.c into `unsigned long'.  This is to avoid
the possibility that some piece of code in there might decide to operate upon
more than 4G (or even 2G) of pages in one hit.

This might be silly, but we'll need it one day.

Cc: Christoph Lameter <clameter@sgi.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] vmscan: scan_control cleanup
Andrew Morton [Wed, 22 Mar 2006 08:08:18 +0000 (00:08 -0800)]
[PATCH] vmscan: scan_control cleanup

Initialise as much of scan_control as possible at the declaration site.  This
tidies things up a bit and assures us that all unmentioned fields are zeroed
out.
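
The pattern relied upon is C designated initializers, where unnamed fields
are guaranteed to be zero.  A sketch with 2.6-era scan_control fields:

  struct scan_control sc = {
      .gfp_mask         = gfp_mask,
      .may_writepage    = !laptop_mode,
      .swap_cluster_max = SWAP_CLUSTER_MAX,
      .may_swap         = 1,
  };
  /* every field not mentioned, e.g. nr_mapped, starts out as zero */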

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] Thin out scan_control: remove nr_to_scan and priority
Christoph Lameter [Wed, 22 Mar 2006 08:08:18 +0000 (00:08 -0800)]
[PATCH] Thin out scan_control: remove nr_to_scan and priority

Make nr_to_scan and priority parameters instead of putting them into scan
control.  This allows various small optimizations and IMHO makes the code
easier to read.
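
Sketched as signatures (abbreviated relative to the real vmscan.c):

  /* before: the values are carried inside scan_control */
  static void shrink_cache(struct zone *zone, struct scan_control *sc);

  /* after: nr_to_scan and priority become explicit parameters */
  static void shrink_cache(unsigned long nr_to_scan, struct zone *zone,
                           struct scan_control *sc);
  static void shrink_zone(int priority, struct zone *zone,
                          struct scan_control *sc);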

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] slab: use on_each_cpu()
Andrew Morton [Wed, 22 Mar 2006 08:08:17 +0000 (00:08 -0800)]
[PATCH] slab: use on_each_cpu()

Slab duplicates on_each_cpu().

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] on_each_cpu(): disable local interrupts
Andrew Morton [Wed, 22 Mar 2006 08:08:16 +0000 (00:08 -0800)]
[PATCH] on_each_cpu(): disable local interrupts

When on_each_cpu() runs the callback on other CPUs, it runs with local
interrupts disabled.  So we should run the function with local interrupts
disabled on this CPU, too.

And do the same for UP, so the callback is run in the same environment on both
UP and SMP.  (strictly it should do preempt_disable() too, but I think
local_irq_disable is sufficiently equivalent).

Also uninlines on_each_cpu().  softirq.c was the most appropriate file I could
find, but it doesn't seem to justify creating a new file.
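
The uninlined helper then reads roughly like this (the 2.6.16-era
four-argument form; later kernels dropped the retry argument):

  int on_each_cpu(void (*func)(void *info), void *info, int retry, int wait)
  {
      int ret = 0;

      preempt_disable();
      ret = smp_call_function(func, info, retry, wait);
      local_irq_disable();    /* match the environment remote CPUs see */
      func(info);
      local_irq_enable();
      preempt_enable();
      return ret;
  }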

Oh, and fix up that comment over (under?) x86's smp_call_function().  It
drives me nuts.

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] slab: Remove SLAB_NO_REAP option
Christoph Lameter [Wed, 22 Mar 2006 08:08:15 +0000 (00:08 -0800)]
[PATCH] slab: Remove SLAB_NO_REAP option

SLAB_NO_REAP is documented as an option that will cause this slab not to be
reaped under memory pressure.  However, that is not what happens.  The only
thing that SLAB_NO_REAP controls at the moment is the reclaim of the unused
slab elements that were allocated in batch in cache_reap().  cache_reap()
runs every few seconds independently of memory pressure.

Could we remove the whole thing?  It's only used by three slabs anyway, and
I cannot find a reason for keeping this option.

There is an additional problem with SLAB_NO_REAP.  If it is set, the recovery
of objects from alien caches is switched off.  Objects not freed on the
same node where they were initially allocated will only be reused if a
certain number of objects accumulates from one alien node (not very likely)
or if the cache is explicitly shrunk.  (Strangely, __cache_shrink does not
check for SLAB_NO_REAP.)

Getting rid of SLAB_NO_REAP fixes the problems with alien cache freeing.
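
Concretely, the removal deletes an early-out in cache_reap() along these
lines (a sketch; the exact 2.6-era loop differs in detail):

  list_for_each_entry(searchp, &cache_chain, next) {
      if (searchp->flags & SLAB_NO_REAP)
          goto next;    /* skipped batch reclaim and alien-cache reaping */
      /* reap logic continues for every other cache */
  }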

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Mark Fasheh <mark.fasheh@oracle.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] slab: fix kernel-doc warnings
Randy Dunlap [Wed, 22 Mar 2006 08:08:14 +0000 (00:08 -0800)]
[PATCH] slab: fix kernel-doc warnings

Fix kernel-doc warnings in mm/slab.c.

Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] mm: kill kmem_cache_t usage
Pekka Enberg [Wed, 22 Mar 2006 08:08:13 +0000 (00:08 -0800)]
[PATCH] mm: kill kmem_cache_t usage

We have struct kmem_cache now so use it instead of the old typedef.
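
That is, declarations like

  kmem_cache_t *cachep;

become

  struct kmem_cache *cachep;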

Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] slab: remove cachep->spinlock
Ravikiran G Thirumalai [Wed, 22 Mar 2006 08:08:12 +0000 (00:08 -0800)]
[PATCH] slab: remove cachep->spinlock

Remove cachep->spinlock.  Locking has moved to the kmem_list3, and most of
the structures protected earlier by cachep->spinlock are now protected by
the l3->list_lock.  Slab cache tunables like batchcount are always accessed
with the cache_chain_mutex held.

Patch tested on SMP and NUMA kernels with dbench processes running,
constant onlining/offlining, and constant cache tuning, all at the same
time.

Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Cc: Christoph Lameter <christoph@lameter.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] slab cleanup
Andrew Morton [Wed, 22 Mar 2006 08:08:11 +0000 (00:08 -0800)]
[PATCH] slab cleanup

slab.c has become a bit revolting again.  Try to repair it.

- Coding style fixes

- Don't do assignments-in-if-statements.

- Don't typecast assignments to/from void*
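
The last two bullets, as hypothetical before/after one-liners (not literal
slab.c lines):

  /* assignment-in-if */
  if (!(objp = kmem_getpages(cachep, flags, nodeid)))
      return NULL;
  /* becomes */
  objp = kmem_getpages(cachep, flags, nodeid);
  if (!objp)
      return NULL;

  /* explicit cast from void * */
  slabp = (struct slab *)objp;
  /* becomes (void * converts implicitly in C) */
  slabp = objp;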

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] slab: extract setup_cpu_cache
Pekka Enberg [Wed, 22 Mar 2006 08:08:11 +0000 (00:08 -0800)]
[PATCH] slab: extract setup_cpu_cache

Extract setup_cpu_cache() function from kmem_cache_create() to make the
latter a little less complex.

Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] slab: object to index mapping cleanup
Pekka Enberg [Wed, 22 Mar 2006 08:08:10 +0000 (00:08 -0800)]
[PATCH] slab: object to index mapping cleanup

Clean up the object to index mapping that has been spread around mm/slab.c.
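
The cleanup centres on small helpers along these lines (a sketch, close to
what the patch adds to mm/slab.c):

  static inline void *index_to_obj(struct kmem_cache *cache,
                                   struct slab *slab, unsigned int idx)
  {
      return slab->s_mem + cache->buffer_size * idx;
  }

  static inline unsigned int obj_to_index(struct kmem_cache *cache,
                                          struct slab *slab, void *obj)
  {
      return (obj - slab->s_mem) / cache->buffer_size;
  }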

Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] kcalloc(): INT_MAX -> ULONG_MAX
Adrian Bunk [Wed, 22 Mar 2006 08:08:09 +0000 (00:08 -0800)]
[PATCH] kcalloc(): INT_MAX -> ULONG_MAX

Since size_t has the same size as a long on all architectures, it's enough
for overflow checks to check against ULONG_MAX.

This change could allow a compiler better optimization (especially in the
n=1 case).

The practical effect seems to be positive, but quite small:

    text           data     bss      dec            hex filename
21762380        5859870 1848928 29471178        1c1b1ca vmlinux-old
21762211        5859870 1848928 29471009        1c1b121 vmlinux-patched
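
After the change, the overflow guard reads roughly as follows (a sketch of
the 2.6-era kcalloc; the real function lives in mm/slab.c):

  void *kcalloc(size_t n, size_t size, gfp_t flags)
  {
      void *ret;

      /* size_t is as wide as long here, so ULONG_MAX is the true ceiling */
      if (n != 0 && size > ULONG_MAX / n)
          return NULL;

      ret = kmalloc(n * size, flags);
      if (ret)
          memset(ret, 0, n * size);
      return ret;
  }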

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] hugepage allocator cleanup
Nick Piggin [Wed, 22 Mar 2006 08:08:08 +0000 (00:08 -0800)]
[PATCH] hugepage allocator cleanup

Insert "fresh" huge pages into the hugepage allocator by the same means as
they are freed back into it.  This reduces code size and allows
enqueue_huge_page to be inlined into the hugepage free fastpath.

Eliminate occurrences of hugepages on the free list with non-zero refcount.
This can allow stricter refcount checks in future.  Also required for
lockless pagecache.
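
The "same means" idea, condensed (a hypothetical simplification of
alloc_fresh_huge_page(); the real function also updates counters under
hugetlb_lock):

  static struct page *alloc_fresh_huge_page_sketch(void)
  {
      struct page *page = alloc_pages(GFP_HIGHUSER | __GFP_COMP,
                                      HUGETLB_PAGE_ORDER);
      if (page) {
          page[1].lru.next = (void *)free_huge_page;  /* compound dtor */
          put_page(page);    /* the free path enqueues it for us */
      }
      return page;
  }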

Signed-off-by: Nick Piggin <npiggin@suse.de>
"This patch also eliminates a leak "cleaned up" by re-clobbering the
refcount on every allocation from the hugepage freelists.  With respect to
the lockless pagecache, the crucial aspect is to eliminate unconditional
set_page_count() to 0 on pages with potentially nonzero refcounts, though
closer inspection suggests the assignments removed are entirely spurious."

Acked-by: William Irwin <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] mm: cleanup bootmem
Nick Piggin [Wed, 22 Mar 2006 08:08:07 +0000 (00:08 -0800)]
[PATCH] mm: cleanup bootmem

The bootmem code added to page_alloc.c duplicated some page freeing code
that it really doesn't need to because it is not so performance critical.

While we're here, make prefetching work properly by actually prefetching
the page we're about to use before prefetching ahead to the next one (i.e.
get the most important transaction started first).  Also prefetch just a
single page ahead rather than leaving a gap of 16.
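
The reordered prefetch, sketched (simplified from the bootmem free loop,
which hands back BITS_PER_LONG pages per chunk):

  prefetchw(page);    /* start the page we will touch first */
  for (loop = 0; loop < BITS_PER_LONG; loop++) {
      struct page *p = &page[loop];

      if (loop + 1 < BITS_PER_LONG)
          prefetchw(p + 1);    /* just one page ahead, no gap of 16 */
      __ClearPageReserved(p);
      set_page_count(p, 0);
  }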

Jack Steiner reported no problems with SGI's ia64 simulator.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] mm: page_state comment more
Nick Piggin [Wed, 22 Mar 2006 08:08:06 +0000 (00:08 -0800)]
[PATCH] mm: page_state comment more

Clarify that preemption needs to be guarded against with the
__xxx_page_state functions.
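
That is, callers of the double-underscore variants must hold off preemption
themselves, e.g. (a usage sketch, assuming the __inc_page_state() flavour
from this series):

  preempt_disable();    /* __inc_page_state is not preempt-safe on its own */
  __inc_page_state(pgactivate);
  preempt_enable();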

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] mm: split highorder pages
Nick Piggin [Wed, 22 Mar 2006 08:08:05 +0000 (00:08 -0800)]
[PATCH] mm: split highorder pages

Have an explicit mm call to split higher-order pages into individual pages.
This should help avoid bugs and make the code's intention more explicit.
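
The new call, roughly as introduced in mm/page_alloc.c (a sketch; the real
function also checks each tail page's count):

  /*
   * Split a higher-order, non-compound page into order-0 pages by
   * giving each constituent page its own reference.
   */
  void split_page(struct page *page, unsigned int order)
  {
      int i;

      BUG_ON(PageCompound(page));
      BUG_ON(!page_count(page));
      for (i = 1; i < (1 << order); i++)
          set_page_count(page + i, 1);
  }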

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: David Howells <dhowells@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] xtensa: pgtable fixes
Nick Piggin [Wed, 22 Mar 2006 08:08:04 +0000 (00:08 -0800)]
[PATCH] xtensa: pgtable fixes

- Don't return uninitialised stack values in case of allocation failure

- Don't bother clearing PageCompound because __GFP_COMP wasn't specified

- Increment over the pte page rather than one pte entry in
  pte_alloc_one_kernel

- Actually increment the page pointer in pte_alloc_one

- Compile fixes, typos.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Chris Zankel <chris@zankel.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] mm: de-skew page refcounting
Nick Piggin [Wed, 22 Mar 2006 08:08:03 +0000 (00:08 -0800)]
[PATCH] mm: de-skew page refcounting

With atomic_add_unless (atomic_inc_not_zero) available, the page refcount no
longer needs to be stored with an offset in order to function correctly.
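
Schematically (a sketch of the skew being removed; before the patch, _count
held one less than the true count):

  /* before: a free page has _count == -1 and page_count() compensates */
  #define page_count(p)    (atomic_read(&(p)->_count) + 1)

  /* after: _count is the true reference count */
  static inline int page_count(struct page *page)
  {
      return atomic_read(&page->_count);
  }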

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] mm: simplify vmscan vs release refcounting
Nick Piggin [Wed, 22 Mar 2006 08:08:03 +0000 (00:08 -0800)]
[PATCH] mm: simplify vmscan vs release refcounting

The VM has an interesting race where a page refcount can drop to zero, but it
is still on the LRU lists for a short time.  This was solved by testing a 0->1
refcount transition when picking up pages from the LRU, and dropping the
refcount in that case.

Instead, use atomic_add_unless to ensure we never pick up a 0 refcount page
from the LRU, thus a 0 refcount page will never have its refcount elevated
until it is allocated again.
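
The pickup inside isolate_lru_pages() then looks roughly like this (a
sketch; get_page_unless_zero() wraps atomic_inc_not_zero on page->_count):

  if (get_page_unless_zero(page)) {
      /* we hold a reference now; the page cannot be freed under us */
      list_add(&page->lru, dst);
      nr_taken++;
  } else {
      /* count was already zero: it is being freed elsewhere, put it back */
      list_add(&page->lru, src);
  }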

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] mm: slab less atomics
Nick Piggin [Wed, 22 Mar 2006 08:08:02 +0000 (00:08 -0800)]
[PATCH] mm: slab less atomics

Atomic operation removal from slab

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
18 years ago[PATCH] mm: page_alloc less atomics
Nick Piggin [Wed, 22 Mar 2006 08:08:01 +0000 (00:08 -0800)]
[PATCH] mm: page_alloc less atomics

More atomic operation removal from page allocator

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>