* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev:
libata: fix ATAPI draining
libata: update atapi_eh_request_sense() such that lbam/lbah contains buffer size
libata-acpi: implement _GTF command filtering
libata-acpi: improve _GTF execution error handling and reporting
libata-acpi: improve ACPI disabling
libata-acpi: implement dev->gtf_cache and evaluate _GTF right after _STM during resume
libata-acpi: implement and use ata_acpi_init_gtm()
libata-acpi: add new hooks ata_acpi_dissociate() and ata_acpi_on_disable()
libata: ata_dev_disable() should be called from EH context
libata: add more opcodes to ata.h
libata: update ata_*_printk() macros such that level can be a variable
libata-acpi: adjust constness in ata_acpi_gtm/stm() parameters
sata_mv: improve warnings about Highpoint RocketRAID 23xx cards
libata: add ST3160023AS / 3.42 to NCQ blacklist
libata: clear link->eh_info.serror from ata_std_postreset()
sata_sil: fix spurious IRQ handling
See http://linux-net.osdl.org/index.php/Bridge for general information
on how to get bridging working.
-- You can also create an inter-guest network using
- "--sharenet=<filename>": any two guests using the same file are on
- the same network. This file is created if it does not exist.
-
There is a helpful mailing list at http://ozlabs.org/mailman/listinfo/lguest
Good luck!
- oom_kill_allocating_task
- mmap_min_address
- numa_zonelist_order
+- nr_hugepages
+- nr_overcommit_hugepages
==============================================================
Otherwise, "zone" order will be selected. Default order is recommended unless
this is causing problems for your system/application.
+
+==============================================================
+
+nr_hugepages
+
+Change the minimum size of the hugepage pool.
+
+See Documentation/vm/hugetlbpage.txt
+
+==============================================================
+
+nr_overcommit_hugepages
+
+Change the maximum size of the hugepage pool. The maximum is
+nr_hugepages + nr_overcommit_hugepages.
+
+See Documentation/vm/hugetlbpage.txt
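+
+For example (the values here are illustrative only), to allow a
+persistent pool of 20 hugepages plus up to 20 more from overcommit:
+
+	echo 20 > /proc/sys/vm/nr_hugepages
+	echo 20 > /proc/sys/vm/nr_overcommit_hugepages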
The output of "cat /proc/meminfo" will have lines like:
.....
-HugePages_Total: xxx
-HugePages_Free: yyy
-HugePages_Rsvd: www
+HugePages_Total: vvv
+HugePages_Free: www
+HugePages_Rsvd: xxx
+HugePages_Surp: yyy
Hugepagesize: zzz kB
where:
HugePages_Rsvd is short for "reserved," and is the number of hugepages
for which a commitment to allocate from the pool has been made, but no
allocation has yet been made. It's vaguely analogous to overcommit.
+HugePages_Surp is short for "surplus," and is the number of hugepages in
+the pool above the value in /proc/sys/vm/nr_hugepages. The maximum
+number of surplus hugepages is controlled by
+/proc/sys/vm/nr_overcommit_hugepages.
/proc/filesystems should also show a filesystem of type "hugetlbfs" configured
in the kernel.
memory that is present in the system at this time. System administrators may want
to put this command in one of the local rc init files. This will enable the
kernel to request huge pages early in the boot process (when the possibility
-of getting physical contiguous pages is still very high).
+of getting physically contiguous pages is still very high). In either
+case, administrators will want to verify the number of hugepages actually
+allocated by checking the sysctl or meminfo.
+
+/proc/sys/vm/nr_overcommit_hugepages indicates how large the pool of
+hugepages can grow, if more hugepages than /proc/sys/vm/nr_hugepages are
+requested by applications. echo'ing any non-zero value into this file
+indicates that the hugetlb subsystem is allowed to try to obtain
+hugepages from the buddy allocator, if the normal pool is exhausted. As
+these surplus hugepages go out of use, they are freed back to the buddy
+allocator.
+
+Caveat: Shrinking the pool via nr_hugepages while a surplus is in effect
+will allow the number of surplus hugepages to exceed the overcommit
+value, as the pool hugepages (which must have been in use for surplus
+hugepages to be allocated) will become surplus hugepages. As long as
+this condition holds, however, no more surplus hugepages will be
+allowed on the system until one of the two sysctls is increased
+sufficiently, or the surplus hugepages go out of use and are freed.
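+
+For example (numbers illustrative): with nr_hugepages = 10 and
+nr_overcommit_hugepages = 2, an application can hold at most 12
+hugepages (10 pool + 2 surplus). If nr_hugepages is then lowered to 8
+while all 12 are in use, 4 pages are now counted as surplus, exceeding
+the overcommit value of 2; no further surplus allocations are allowed
+until the counts fall back within the limits.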
If the user applications are going to request hugepages using mmap system
call, then it is required that system administrator mount a file system of
options, you can use [G|g]/[M|m]/[K|k] to represent giga/mega/kilo. For
example, size=2K has the same meaning as size=2048.
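
For example, a minimal sketch of such a mount (the mount point and size
are illustrative):

	mount -t hugetlbfs none /mnt/huge -o size=256M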
-read and write system calls are not supported on files that reside on hugetlb
-file systems.
+While read system calls are supported on files that reside on hugetlb
+file systems, write system calls are not.
Regular chown, chgrp, and chmod commands (with the right permissions) can be
used to change the file attributes on hugetlbfs.
CHECKFLAGS += -D__alpha__ -m64
cflags-y := -pipe -mno-fp-regs -ffixed-8 -msmall-data
-cpuflags-$(CONFIG_ALPHA_EV67) := -mcpu=ev67
-cpuflags-$(CONFIG_ALPHA_EV6) := -mcpu=ev6
+cpuflags-$(CONFIG_ALPHA_EV4) := -mcpu=ev4
+cpuflags-$(CONFIG_ALPHA_EV5) := -mcpu=ev5
+cpuflags-$(CONFIG_ALPHA_EV56) := -mcpu=ev56
cpuflags-$(CONFIG_ALPHA_POLARIS) := -mcpu=pca56
cpuflags-$(CONFIG_ALPHA_SX164) := -mcpu=pca56
-cpuflags-$(CONFIG_ALPHA_EV56) := -mcpu=ev56
-cpuflags-$(CONFIG_ALPHA_EV5) := -mcpu=ev5
-cpuflags-$(CONFIG_ALPHA_EV4) := -mcpu=ev4
+cpuflags-$(CONFIG_ALPHA_EV6) := -mcpu=ev6
+cpuflags-$(CONFIG_ALPHA_EV67) := -mcpu=ev67
# If GENERIC, make sure to turn off any instruction set extensions that
# the host compiler might have on by default. Given that EV4 and EV5
# have the same instruction set, prefer EV5 because an EV5 schedule is
struct el_subpacket_handler ev7_pal_subpacket_handler =
SUBPACKET_HANDLER_INIT(EL_CLASS__PAL, ev7_process_pal_subpacket);
-void
+void __init
ev7_register_error_handlers(void)
{
int i;
mb();
}
-void
+void __init
marvel_register_error_handlers(void)
{
ev7_register_error_handlers();
SUBPACKET_HANDLER_INIT(EL_CLASS__REGATTA_FAMILY,
el_process_regatta_subpacket);
-void
+void __init
titan_register_error_handlers(void)
{
size_t i;
#define __initmv __initdata
#define ALIAS_MV(x)
#else
-#define __initmv
+#define __initmv __initdata_refok
/* GCC actually has a syntax for defining aliases, but is under some
delusion that you shouldn't be able to declare it extern somewhere
extql t2, a1, t2 # U :
cmpbge zero, t1, t8 # E : is there a zero?
- andnot t2, t6, t12 # E : dest mask for a single word copy
+ andnot t2, t6, t2 # E : dest mask for a single word copy
or t8, t10, t5 # E : test for end-of-count too
- cmpbge zero, t12, t3 # E :
+ cmpbge zero, t2, t3 # E :
cmoveq a2, t5, t8 # E : Latency=2, extra map slot
nop # E : keep with cmoveq
andnot t8, t3, t8 # E : (stall)
negq t8, t6 # E : build bitmask of bytes <= zero
mskqh t1, t4, t1 # U :
- and t6, t8, t2 # E :
- subq t2, 1, t6 # E : (stall)
- or t6, t2, t8 # E : (stall)
- zapnot t12, t8, t12 # U : prepare source word; mirror changes (stall)
+ and t6, t8, t12 # E :
+ subq t12, 1, t6 # E : (stall)
+ or t6, t12, t8 # E : (stall)
+ zapnot t2, t8, t2 # U : prepare source word; mirror changes (stall)
zapnot t1, t8, t1 # U : to source validity mask
- andnot t0, t12, t0 # E : zero place for source to reside
+ andnot t0, t2, t0 # E : zero place for source to reside
or t0, t1, t0 # E : and put it there (stall both t0, t1)
stq_u t0, 0(a0) # L : (stall)
or $3, $24, $3 # clear the bits between the last
or $4, $27, $4 # written byte and the last byte in COUNT
- andnot $4, $3, $4
+ andnot $3, $4, $4
zap $1, $4, $1
stq_u $1, 0($16)
extql t2, a1, t2 # e0 :
cmpbge zero, t1, t8 # .. e1 : is there a zero?
- andnot t2, t6, t12 # e0 : dest mask for a single word copy
+ andnot t2, t6, t2 # e0 : dest mask for a single word copy
or t8, t10, t5 # .. e1 : test for end-of-count too
- cmpbge zero, t12, t3 # e0 :
+ cmpbge zero, t2, t3 # e0 :
cmoveq a2, t5, t8 # .. e1 :
andnot t8, t3, t8 # e0 :
beq t8, $u_head # .. e1 (zdb)
ldq_u t0, 0(a0) # e0 :
negq t8, t6 # .. e1 : build bitmask of bytes <= zero
mskqh t1, t4, t1 # e0 :
- and t6, t8, t2 # .. e1 :
- subq t2, 1, t6 # e0 :
- or t6, t2, t8 # e1 :
+ and t6, t8, t12 # .. e1 :
+ subq t12, 1, t6 # e0 :
+ or t6, t12, t8 # e1 :
- zapnot t12, t8, t12 # e0 : prepare source word; mirror changes
+ zapnot t2, t8, t2 # e0 : prepare source word; mirror changes
zapnot t1, t8, t1 # .. e1 : to source validity mask
- andnot t0, t12, t0 # e0 : zero place for source to reside
+ andnot t0, t2, t0 # e0 : zero place for source to reside
or t0, t1, t0 # e1 : and put it there
stq_u t0, 0(a0) # e0 :
ret (t9) # .. e1 :
close(fds[1]);
if (pid > 0)
- CATCH_EINTR(err = waitpid(pid, NULL, 0));
+ helper_wait(pid, 0, "change_tramp");
return pid;
}
{
struct slip_pre_exec_data pe_data;
char *output;
- int status, pid, fds[2], err, output_len;
+ int pid, fds[2], err, output_len;
err = os_pipe(fds, 1, 0);
if (err < 0) {
read_output(fds[0], output, output_len);
printk("%s", output);
- CATCH_EINTR(err = waitpid(pid, &status, 0));
- if (err < 0)
- err = errno;
- else if (!WIFEXITED(status) || (WEXITSTATUS(status) != 0)) {
- printk(UM_KERN_ERR "'%s' didn't exit with status 0\n", argv[0]);
- err = -EINVAL;
- }
- else err = 0;
-
+ err = helper_wait(pid, 0, argv[0]);
close(fds[0]);
out_free:
static void slirp_close(int fd, void *data)
{
struct slirp_data *pri = data;
- int status,err;
+ int err;
close(fd);
close(pri->slave);
"(%d)\n", pri->pid, errno);
}
#endif
-
- CATCH_EINTR(err = waitpid(pri->pid, &status, WNOHANG));
- if (err < 0) {
- printk(UM_KERN_ERR "slirp_close: waitpid returned %d\n", errno);
- return;
- }
-
- if (err == 0) {
- printk(UM_KERN_ERR "slirp_close: process %d has not exited\n",
- pri->pid);
+ err = helper_wait(pri->pid, 1, "slirp_close");
+ if (err < 0)
return;
- }
pri->pid = -1;
}
goto out_close;
}
- pid = clone(io_thread, (void *) sp, CLONE_FILES | CLONE_VM | SIGCHLD,
- NULL);
+ pid = clone(io_thread, (void *) sp, CLONE_FILES | CLONE_VM, NULL);
if(pid < 0){
err = -errno;
printk("start_io_thread - clone failed : errno = %d\n", errno);
extern int run_helper(void (*pre_exec)(void *), void *pre_data, char **argv);
extern int run_helper_thread(int (*proc)(void *), void *arg,
unsigned int flags, unsigned long *stack_out);
-extern int helper_wait(int pid);
+extern int helper_wait(int pid, int nohang, char *pname);
/* tls.c */
goto out_close_pipe;
err = run_helper_thread(not_aio_thread, NULL,
- CLONE_FILES | CLONE_VM | SIGCHLD, &aio_stack);
+ CLONE_FILES | CLONE_VM, &aio_stack);
if (err < 0)
goto out_close_pipe;
}
err = run_helper_thread(aio_thread, NULL,
- CLONE_FILES | CLONE_VM | SIGCHLD, &aio_stack);
+ CLONE_FILES | CLONE_VM, &aio_stack);
if (err < 0)
return err;
int control_remote, int data_me, int data_remote)
{
struct etap_pre_exec_data pe_data;
- int pid, status, err, n;
+ int pid, err, n;
char version_buf[sizeof("nnnnn\0")];
char data_fd_buf[sizeof("nnnnnn\0")];
char gate_buf[sizeof("nnn.nnn.nnn.nnn\0")];
}
if (c != 1) {
printk(UM_KERN_ERR "etap_tramp : uml_net failed\n");
- err = -EINVAL;
- CATCH_EINTR(n = waitpid(pid, &status, 0));
- if (n < 0)
- err = -errno;
- else if (!WIFEXITED(status) || (WEXITSTATUS(status) != 1))
- printk(UM_KERN_ERR "uml_net didn't exit with "
- "status 1\n");
+ err = helper_wait(pid, 0, "uml_net");
}
return err;
}
"errno = %d\n", errno);
return err;
}
- CATCH_EINTR(waitpid(pid, NULL, 0));
+ helper_wait(pid, 0, "tuntap_open_tramp");
cmsg = CMSG_FIRSTHDR(&msg);
if (cmsg == NULL) {
data.fd = fds[1];
data.buf = __cant_sleep() ? kmalloc(PATH_MAX, UM_GFP_ATOMIC) :
kmalloc(PATH_MAX, UM_GFP_KERNEL);
- pid = clone(helper_child, (void *) sp, CLONE_VM | SIGCHLD, &data);
+ pid = clone(helper_child, (void *) sp, CLONE_VM, &data);
if (pid < 0) {
ret = -errno;
printk("run_helper : clone failed, errno = %d\n", errno);
ret = n;
kill(pid, SIGKILL);
}
- CATCH_EINTR(waitpid(pid, NULL, 0));
+ CATCH_EINTR(waitpid(pid, NULL, __WCLONE));
}
out_free2:
return -ENOMEM;
sp = stack + UM_KERN_PAGE_SIZE - sizeof(void *);
- pid = clone(proc, (void *) sp, flags | SIGCHLD, arg);
+ pid = clone(proc, (void *) sp, flags, arg);
if (pid < 0) {
err = -errno;
printk("run_helper_thread : clone failed, errno = %d\n",
return err;
}
if (stack_out == NULL) {
- CATCH_EINTR(pid = waitpid(pid, &status, 0));
+ CATCH_EINTR(pid = waitpid(pid, &status, __WCLONE));
if (pid < 0) {
err = -errno;
printk("run_helper_thread - wait failed, errno = %d\n",
return pid;
}
-int helper_wait(int pid)
+int helper_wait(int pid, int nohang, char *pname)
{
- int ret;
+ int ret, status;
+ int wflags = __WCLONE;
- CATCH_EINTR(ret = waitpid(pid, NULL, WNOHANG));
+ if (nohang)
+ wflags |= WNOHANG;
+
+ if (!pname)
+ pname = "helper_wait";
+
+ CATCH_EINTR(ret = waitpid(pid, &status, wflags));
if (ret < 0) {
- ret = -errno;
- printk("helper_wait : waitpid failed, errno = %d\n", errno);
- }
- return ret;
+ printk(UM_KERN_ERR "%s : waitpid process %d failed, "
+ "errno = %d\n", pname, pid, errno);
+ return -errno;
+ } else if (nohang && ret == 0) {
+ printk(UM_KERN_ERR "%s : process %d has not exited\n",
+ pname, pid);
+ return -ECHILD;
+ } else if (!WIFEXITED(status) || WEXITSTATUS(status) != 0) {
+ printk(UM_KERN_ERR "%s : process %d didn't exit with "
+ "status 0\n", pname, pid);
+ return -ECHILD;
+ } else
+ return 0;
}
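
For reference, a minimal sketch of the two calling styles, taken from
the call sites updated in this series:

	/* blocking wait; the name is used only in error messages */
	err = helper_wait(pid, 0, "change_tramp");

	/* non-blocking check (WNOHANG), as in slirp_close() */
	err = helper_wait(pri->pid, 1, "slirp_close");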
{
kill(pid, SIGKILL);
if (reap_child)
- CATCH_EINTR(waitpid(pid, NULL, 0));
+ CATCH_EINTR(waitpid(pid, NULL, __WALL));
}
/* This is here uniquely to have access to the userspace errno, i.e. the one
ptrace(PTRACE_KILL, pid);
ptrace(PTRACE_CONT, pid);
if (reap_child)
- CATCH_EINTR(waitpid(pid, NULL, 0));
+ CATCH_EINTR(waitpid(pid, NULL, __WALL));
}
/* Don't use the glibc version, which caches the result in TLS. It misses some
int n, status, err;
while (1) {
- CATCH_EINTR(n = waitpid(pid, &status, WUNTRACED));
+ CATCH_EINTR(n = waitpid(pid, &status, WUNTRACED | __WALL));
if ((n < 0) || !WIFSTOPPED(status))
goto bad_wait;
panic("handle_trap - continuing to end of syscall "
"failed, errno = %d\n", errno);
- CATCH_EINTR(err = waitpid(pid, &status, WUNTRACED));
+ CATCH_EINTR(err = waitpid(pid, &status, WUNTRACED | __WALL));
if ((err < 0) || !WIFSTOPPED(status) ||
(WSTOPSIG(status) != SIGTRAP + 0x80)) {
err = ptrace_dump_regs(pid);
panic("start_userspace : mmap failed, errno = %d", errno);
sp = (unsigned long) stack + UM_KERN_PAGE_SIZE - sizeof(void *);
- flags = CLONE_FILES | SIGCHLD;
+ flags = CLONE_FILES;
if (proc_mm)
flags |= CLONE_VM;
+ else
+ flags |= SIGCHLD;
pid = clone(userspace_tramp, (void *) sp, flags, (void *) stub_stack);
if (pid < 0)
panic("start_userspace : clone failed, errno = %d", errno);
do {
- CATCH_EINTR(n = waitpid(pid, &status, WUNTRACED));
+ CATCH_EINTR(n = waitpid(pid, &status, WUNTRACED | __WALL));
if (n < 0)
panic("start_userspace : wait failed, errno = %d",
errno);
"pid=%d, ptrace operation = %d, errno = %d\n",
pid, op, errno);
- CATCH_EINTR(err = waitpid(pid, &status, WUNTRACED));
+ CATCH_EINTR(err = waitpid(pid, &status, WUNTRACED | __WALL));
if (err < 0)
panic("userspace - waitpid failed, errno = %d\n",
errno);
* nothing reasonable to do if that fails.
*/
- while ((pid = waitpid(-1, NULL, WNOHANG)) > 0)
+ while ((pid = waitpid(-1, NULL, WNOHANG | __WALL)) > 0)
os_kill_ptraced_process(pid, 0);
abort();
return 0;
}
-static int res_kernel_text_pud_init(pud_t *pud, unsigned long start)
-{
- pmd_t *pmd;
- unsigned long paddr;
-
- pmd = (pmd_t *)get_safe_page(GFP_ATOMIC);
- if (!pmd)
- return -ENOMEM;
- set_pud(pud + pud_index(start), __pud(__pa(pmd) | _KERNPG_TABLE));
- for (paddr = 0; paddr < KERNEL_TEXT_SIZE; pmd++, paddr += PMD_SIZE) {
- unsigned long pe;
-
- pe = __PAGE_KERNEL_LARGE_EXEC | _PAGE_GLOBAL | paddr;
- pe &= __supported_pte_mask;
- set_pmd(pmd, __pmd(pe));
- }
-
- return 0;
-}
-
static int set_up_temporary_mappings(void)
{
unsigned long start, end, next;
- pud_t *pud;
int error;
temp_level4_pgt = (pgd_t *)get_safe_page(GFP_ATOMIC);
if (!temp_level4_pgt)
return -ENOMEM;
+ /* It is safe to reuse the original kernel mapping */
+ set_pgd(temp_level4_pgt + pgd_index(__START_KERNEL_map),
+ init_level4_pgt[pgd_index(__START_KERNEL_map)]);
+
/* Set up the direct mapping from scratch */
start = (unsigned long)pfn_to_kaddr(0);
end = (unsigned long)pfn_to_kaddr(end_pfn);
for (; start < end; start = next) {
- pud = (pud_t *)get_safe_page(GFP_ATOMIC);
+ pud_t *pud = (pud_t *)get_safe_page(GFP_ATOMIC);
if (!pud)
return -ENOMEM;
next = start + PGDIR_SIZE;
set_pgd(temp_level4_pgt + pgd_index(start),
mk_kernel_pgd(__pa(pud)));
}
-
- /* Set up the kernel text mapping from scratch */
- pud = (pud_t *)get_safe_page(GFP_ATOMIC);
- if (!pud)
- return -ENOMEM;
- error = res_kernel_text_pud_init(pud, __START_KERNEL_map);
- if (!error)
- set_pgd(temp_level4_pgt + pgd_index(__START_KERNEL_map),
- __pgd(__pa(pud) | _PAGE_TABLE));
-
- return error;
+ return 0;
}
int swsusp_arch_resume(void)
p->kobj.parent = parent;
p->kobj.ktype = ktype;
p->pd = pd;
- if (kobject_register(&p->kobj) != 0)
+ if (kobject_register(&p->kobj) != 0) {
+ kobject_put(&p->kobj);
return NULL;
+ }
return p;
}
/*
drv_attr = cpufreq_driver->attr;
while ((drv_attr) && (*drv_attr)) {
ret = sysfs_create_file(&policy->kobj, &((*drv_attr)->attr));
- if (ret)
+ if (ret) {
+ unlock_policy_rwsem_write(cpu);
goto err_out_driver_exit;
+ }
drv_attr++;
}
if (cpufreq_driver->get){
ret = sysfs_create_file(&policy->kobj, &cpuinfo_cur_freq.attr);
- if (ret)
+ if (ret) {
+ unlock_policy_rwsem_write(cpu);
goto err_out_driver_exit;
+ }
}
if (cpufreq_driver->target){
ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr);
- if (ret)
+ if (ret) {
+ unlock_policy_rwsem_write(cpu);
goto err_out_driver_exit;
+ }
}
spin_lock_irqsave(&cpufreq_driver_lock, flags);
return -1;
}
-static void __cpuexit cpufreq_stats_free_table(unsigned int cpu)
+static void cpufreq_stats_free_table(unsigned int cpu)
{
struct cpufreq_stats *stat = cpufreq_stats_table[cpu];
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
tx_to_ioat_desc(tx)->dst = addr;
}
+/**
+ * ioat_dma_memcpy_issue_pending - push potentially unrecognized appended
+ * descriptors to hw
+ * @chan: DMA channel handle
+ */
static inline void __ioat1_dma_memcpy_issue_pending(
- struct ioat_dma_chan *ioat_chan);
+ struct ioat_dma_chan *ioat_chan)
+{
+ ioat_chan->pending = 0;
+ writeb(IOAT_CHANCMD_APPEND, ioat_chan->reg_base + IOAT1_CHANCMD_OFFSET);
+}
+
+static void ioat1_dma_memcpy_issue_pending(struct dma_chan *chan)
+{
+ struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
+
+ if (ioat_chan->pending != 0) {
+ spin_lock_bh(&ioat_chan->desc_lock);
+ __ioat1_dma_memcpy_issue_pending(ioat_chan);
+ spin_unlock_bh(&ioat_chan->desc_lock);
+ }
+}
+
static inline void __ioat2_dma_memcpy_issue_pending(
- struct ioat_dma_chan *ioat_chan);
+ struct ioat_dma_chan *ioat_chan)
+{
+ ioat_chan->pending = 0;
+ writew(ioat_chan->dmacount,
+ ioat_chan->reg_base + IOAT_CHAN_DMACOUNT_OFFSET);
+}
+
+static void ioat2_dma_memcpy_issue_pending(struct dma_chan *chan)
+{
+ struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
+
+ if (ioat_chan->pending != 0) {
+ spin_lock_bh(&ioat_chan->desc_lock);
+ __ioat2_dma_memcpy_issue_pending(ioat_chan);
+ spin_unlock_bh(&ioat_chan->desc_lock);
+ }
+}
static dma_cookie_t ioat1_tx_submit(struct dma_async_tx_descriptor *tx)
{
prev = to_ioat_desc(ioat_chan->used_desc.prev);
prefetch(prev->hw);
do {
- copy = min((u32) len, ioat_chan->xfercap);
+ copy = min_t(size_t, len, ioat_chan->xfercap);
new->async_tx.ack = 1;
orig_ack = first->async_tx.ack;
new = first;
- /* ioat_chan->desc_lock is still in force in version 2 path */
-
+ /*
+ * ioat_chan->desc_lock is still in force in version 2 path
+ * it gets unlocked at end of this function
+ */
do {
- copy = min((u32) len, ioat_chan->xfercap);
+ copy = min_t(size_t, len, ioat_chan->xfercap);
new->async_tx.ack = 1;
static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
{
struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
- struct ioat_desc_sw *desc = NULL;
+ struct ioat_desc_sw *desc;
u16 chanctrl;
u32 chanerr;
int i;
static struct ioat_desc_sw *
ioat1_dma_get_next_descriptor(struct ioat_dma_chan *ioat_chan)
{
- struct ioat_desc_sw *new = NULL;
+ struct ioat_desc_sw *new;
if (!list_empty(&ioat_chan->free_desc)) {
new = to_ioat_desc(ioat_chan->free_desc.next);
} else {
/* try to get another desc */
new = ioat_dma_alloc_descriptor(ioat_chan, GFP_ATOMIC);
- /* will this ever happen? */
- /* TODO add upper limit on these */
- BUG_ON(!new);
+ if (!new) {
+ dev_err(&ioat_chan->device->pdev->dev,
+ "alloc failed\n");
+ return NULL;
+ }
}
prefetch(new->hw);
static struct ioat_desc_sw *
ioat2_dma_get_next_descriptor(struct ioat_dma_chan *ioat_chan)
{
- struct ioat_desc_sw *new = NULL;
+ struct ioat_desc_sw *new;
/*
* used.prev points to where to start processing
if (ioat_chan->used_desc.prev &&
ioat_chan->used_desc.next == ioat_chan->used_desc.prev->prev) {
- struct ioat_desc_sw *desc = NULL;
- struct ioat_desc_sw *noop_desc = NULL;
+ struct ioat_desc_sw *desc;
+ struct ioat_desc_sw *noop_desc;
int i;
/* set up the noop descriptor */
ioat_chan->pending++;
ioat_chan->dmacount++;
- /* get a few more descriptors */
+ /* try to get a few more descriptors */
for (i = 16; i; i--) {
desc = ioat_dma_alloc_descriptor(ioat_chan, GFP_ATOMIC);
- BUG_ON(!desc);
+ if (!desc) {
+ dev_err(&ioat_chan->device->pdev->dev,
+ "alloc failed\n");
+ break;
+ }
list_add_tail(&desc->node, ioat_chan->used_desc.next);
desc->hw->next
spin_lock_bh(&ioat_chan->desc_lock);
new = ioat_dma_get_next_descriptor(ioat_chan);
- new->len = len;
spin_unlock_bh(&ioat_chan->desc_lock);
- return new ? &new->async_tx : NULL;
+ if (new) {
+ new->len = len;
+ return &new->async_tx;
+ } else
+ return NULL;
}
static struct dma_async_tx_descriptor *ioat2_dma_prep_memcpy(
spin_lock_bh(&ioat_chan->desc_lock);
new = ioat2_dma_get_next_descriptor(ioat_chan);
- new->len = len;
-
- /* leave ioat_chan->desc_lock set in version 2 path */
- return new ? &new->async_tx : NULL;
-}
+ /*
+ * leave ioat_chan->desc_lock set in ioat 2 path
+ * it will get unlocked at end of tx_submit
+ */
-/**
- * ioat_dma_memcpy_issue_pending - push potentially unrecognized appended
- * descriptors to hw
- * @chan: DMA channel handle
- */
-static inline void __ioat1_dma_memcpy_issue_pending(
- struct ioat_dma_chan *ioat_chan)
-{
- ioat_chan->pending = 0;
- writeb(IOAT_CHANCMD_APPEND, ioat_chan->reg_base + IOAT1_CHANCMD_OFFSET);
-}
-
-static void ioat1_dma_memcpy_issue_pending(struct dma_chan *chan)
-{
- struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-
- if (ioat_chan->pending != 0) {
- spin_lock_bh(&ioat_chan->desc_lock);
- __ioat1_dma_memcpy_issue_pending(ioat_chan);
- spin_unlock_bh(&ioat_chan->desc_lock);
- }
-}
-
-static inline void __ioat2_dma_memcpy_issue_pending(
- struct ioat_dma_chan *ioat_chan)
-{
- ioat_chan->pending = 0;
- writew(ioat_chan->dmacount,
- ioat_chan->reg_base + IOAT_CHAN_DMACOUNT_OFFSET);
-}
-
-static void ioat2_dma_memcpy_issue_pending(struct dma_chan *chan)
-{
- struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-
- if (ioat_chan->pending != 0) {
- spin_lock_bh(&ioat_chan->desc_lock);
- __ioat2_dma_memcpy_issue_pending(ioat_chan);
- spin_unlock_bh(&ioat_chan->desc_lock);
- }
+ if (new) {
+ new->len = len;
+ return &new->async_tx;
+ } else
+ return NULL;
}
static void ioat_dma_cleanup_tasklet(unsigned long data)
static void ioat_dma_test_callback(void *dma_async_param)
{
printk(KERN_ERR "ioatdma: ioat_dma_test_callback(%p)\n",
- dma_async_param);
+ dma_async_param);
}
/**
u8 *src;
u8 *dest;
struct dma_chan *dma_chan;
- struct dma_async_tx_descriptor *tx = NULL;
+ struct dma_async_tx_descriptor *tx;
dma_addr_t addr;
dma_cookie_t cookie;
int err = 0;
err_dma_pool:
kfree(device);
err_kzalloc:
- dev_err(&device->pdev->dev,
+ dev_err(&pdev->dev,
"Intel(R) I/OAT DMA Engine initialization failed\n");
return NULL;
}
dma_cookie_t completed_cookie;
unsigned long last_completion;
- u32 xfercap; /* XFERCAP register value expanded out */
+ size_t xfercap; /* XFERCAP register value expanded out */
spinlock_t cleanup_lock;
spinlock_t desc_lock;
ret = pmac_suspend_devices();
if (ret) {
pbook_free_pci_save();
+ iounmap(mem_ctrl);
printk(KERN_ERR "Sleep rejected by devices\n");
return ret;
}
{
.procname = "timeslice",
.data = NULL,
- .maxlen = sizeof(int),
+ .maxlen = sizeof(unsigned long),
.mode = 0644,
.proc_handler = &proc_doulongvec_ms_jiffies_minmax,
.extra1 = (void*) &parport_min_timeslice_value,
goto out;
}
- ret = request_irq(irq, at32_rtc_interrupt, IRQF_SHARED, "rtc", rtc);
- if (ret) {
- dev_dbg(&pdev->dev, "could not request irq %d\n", irq);
- goto out;
- }
-
rtc->irq = irq;
rtc->regs = ioremap(regs->start, regs->end - regs->start + 1);
if (!rtc->regs) {
ret = -ENOMEM;
dev_dbg(&pdev->dev, "could not map I/O memory\n");
- goto out_free_irq;
+ goto out;
}
spin_lock_init(&rtc->lock);
| RTC_BIT(CTRL_EN));
}
+ ret = request_irq(irq, at32_rtc_interrupt, IRQF_SHARED, "rtc", rtc);
+ if (ret) {
+ dev_dbg(&pdev->dev, "could not request irq %d\n", irq);
+ goto out_iounmap;
+ }
+
rtc->rtc = rtc_device_register(pdev->name, &pdev->dev,
&at32_rtc_ops, THIS_MODULE);
if (IS_ERR(rtc->rtc)) {
dev_dbg(&pdev->dev, "could not register rtc device\n");
ret = PTR_ERR(rtc->rtc);
- goto out_iounmap;
+ goto out_free_irq;
}
platform_set_drvdata(pdev, rtc);
return 0;
-out_iounmap:
- iounmap(rtc->regs);
out_free_irq:
free_irq(irq, rtc);
+out_iounmap:
+ iounmap(rtc->regs);
out:
kfree(rtc);
return ret;
help
Enabling this option allows you to explicitly choose which
compression modules, if any, are enabled in JFFS2. Removing
- compressors and mean you cannot read existing file systems,
+ compressors can mean you cannot read existing file systems,
and enabling experimental compressors can mean that you
write a file system which cannot be read by a standard kernel.
}
#endif
-static inline void flush_warnings(struct dquot **dquots, char *warntype)
+static inline void flush_warnings(struct dquot * const *dquots, char *warntype)
{
int i;
for (cnt = 0; cnt < MAXQUOTAS; cnt++)
if (inode->i_dquot[cnt])
mark_dquot_dirty(inode->i_dquot[cnt]);
- flush_warnings((struct dquot **)inode->i_dquot, warntype);
+ flush_warnings(inode->i_dquot, warntype);
up_read(&sb_dqopt(inode->i_sb)->dqptr_sem);
return ret;
}
struct ecryptfs_global_auth_tok *new_auth_tok;
int rc = 0;
- new_auth_tok = kmem_cache_alloc(ecryptfs_global_auth_tok_cache,
+ new_auth_tok = kmem_cache_zalloc(ecryptfs_global_auth_tok_cache,
GFP_KERNEL);
if (!new_auth_tok) {
rc = -ENOMEM;
lower_mnt = nd.mnt;
ecryptfs_set_superblock_lower(sb, lower_root->d_sb);
sb->s_maxbytes = lower_root->d_sb->s_maxbytes;
+ sb->s_blocksize = lower_root->d_sb->s_blocksize;
ecryptfs_set_dentry_lower(sb->s_root, lower_root);
ecryptfs_set_dentry_lower_mnt(sb->s_root, lower_mnt);
rc = ecryptfs_interpose(lower_root, sb->s_root, sb, 0);
return 0;
}
+/* This function must zero any hole we create */
static int ecryptfs_prepare_write(struct file *file, struct page *page,
unsigned from, unsigned to)
{
int rc = 0;
+ loff_t prev_page_end_size;
- if (from == 0 && to == PAGE_CACHE_SIZE)
- goto out; /* If we are writing a full page, it will be
- up to date. */
if (!PageUptodate(page)) {
rc = ecryptfs_read_lower_page_segment(page, page->index, 0,
PAGE_CACHE_SIZE,
} else
SetPageUptodate(page);
}
- if (page->index != 0) {
- loff_t end_of_prev_pg_pos =
- (((loff_t)page->index << PAGE_CACHE_SHIFT) - 1);
- if (end_of_prev_pg_pos > i_size_read(page->mapping->host)) {
+ prev_page_end_size = ((loff_t)page->index << PAGE_CACHE_SHIFT);
+
+ /*
+ * If creating a page or more of holes, zero them out via truncate.
+ * Note, this will increase i_size.
+ */
+ if (page->index != 0) {
+ if (prev_page_end_size > i_size_read(page->mapping->host)) {
rc = ecryptfs_truncate(file->f_path.dentry,
- end_of_prev_pg_pos);
+ prev_page_end_size);
if (rc) {
printk(KERN_ERR "Error on attempt to "
"truncate to (higher) offset [%lld];"
- " rc = [%d]\n", end_of_prev_pg_pos, rc);
+ " rc = [%d]\n", prev_page_end_size, rc);
goto out;
}
}
- if (end_of_prev_pg_pos + 1 > i_size_read(page->mapping->host))
- zero_user_page(page, 0, PAGE_CACHE_SIZE, KM_USER0);
+ }
+ /*
+ * Writing to a new page, and creating a small hole from start of page?
+ * Zero it out.
+ */
+ if ((i_size_read(page->mapping->host) == prev_page_end_size) &&
+ (from != 0)) {
+ zero_user_page(page, 0, PAGE_CACHE_SIZE, KM_USER0);
}
out:
return rc;
loff_t pos;
int rc = 0;
+ /*
+ * if we are writing beyond current size, then start pos
+ * at the current size - we'll fill in zeros from there.
+ */
if (offset > ecryptfs_file_size)
pos = ecryptfs_file_size;
else
if (num_bytes > total_remaining_bytes)
num_bytes = total_remaining_bytes;
if (pos < offset) {
+ /* remaining zeros to write, up to destination offset */
size_t total_remaining_zeros = (offset - pos);
if (num_bytes > total_remaining_zeros)
}
}
ecryptfs_page_virt = kmap_atomic(ecryptfs_page, KM_USER0);
+
+ /*
+ * pos: where we're now writing, offset: where the request was
+ * If current pos is before request, we are filling zeros
+ * If we are at or beyond request, we are writing the *data*
+ * If we're in a fresh page beyond eof, zero it in either case
+ */
+ if (pos < offset || !start_offset_in_page) {
+ /* We are extending past the previous end of the file.
+ * Fill in zero values to the end of the page */
+ memset(((char *)ecryptfs_page_virt
+ + start_offset_in_page), 0,
+ PAGE_CACHE_SIZE - start_offset_in_page);
+ }
+
+ /* pos >= offset, we are now writing the data request */
if (pos >= offset) {
memcpy(((char *)ecryptfs_page_virt
+ start_offset_in_page),
(data + data_offset), num_bytes);
data_offset += num_bytes;
- } else {
- /* We are extending past the previous end of the file.
- * Fill in zero values up to the start of where we
- * will be writing data. */
- memset(((char *)ecryptfs_page_virt
- + start_offset_in_page), 0, num_bytes);
}
kunmap_atomic(ecryptfs_page_virt, KM_USER0);
flush_dcache_page(ecryptfs_page);
sbi->s_blocks_per_group = le32_to_cpu(es->s_blocks_per_group);
sbi->s_frags_per_group = le32_to_cpu(es->s_frags_per_group);
sbi->s_inodes_per_group = le32_to_cpu(es->s_inodes_per_group);
- if (EXT3_INODE_SIZE(sb) == 0)
+ if (EXT3_INODE_SIZE(sb) == 0 || EXT3_INODES_PER_GROUP(sb) == 0)
goto cantfind_ext3;
sbi->s_inodes_per_block = blocksize / EXT3_INODE_SIZE(sb);
if (sbi->s_inodes_per_block == 0)
sbi->s_desc_size = EXT4_MIN_DESC_SIZE;
sbi->s_blocks_per_group = le32_to_cpu(es->s_blocks_per_group);
sbi->s_inodes_per_group = le32_to_cpu(es->s_inodes_per_group);
- if (EXT4_INODE_SIZE(sb) == 0)
+ if (EXT4_INODE_SIZE(sb) == 0 || EXT4_INODES_PER_GROUP(sb) == 0)
goto cantfind_ext4;
sbi->s_inodes_per_block = blocksize / EXT4_INODE_SIZE(sb);
if (sbi->s_inodes_per_block == 0)
__EXTERN_INLINE u8
IO_CONCAT(__IO_PREFIX,readb)(const volatile void __iomem *a)
{
- return IO_CONCAT(__IO_PREFIX,ioread8)((void __iomem *)a);
+ void __iomem *addr = (void __iomem *)a;
+ return IO_CONCAT(__IO_PREFIX,ioread8)(addr);
}
__EXTERN_INLINE u16
IO_CONCAT(__IO_PREFIX,readw)(const volatile void __iomem *a)
{
- return IO_CONCAT(__IO_PREFIX,ioread16)((void __iomem *)a);
+ void __iomem *addr = (void __iomem *)a;
+ return IO_CONCAT(__IO_PREFIX,ioread16)(addr);
}
__EXTERN_INLINE void
IO_CONCAT(__IO_PREFIX,writeb)(u8 b, volatile void __iomem *a)
{
- IO_CONCAT(__IO_PREFIX,iowrite8)(b, (void __iomem *)a);
+ void __iomem *addr = (void __iomem *)a;
+ IO_CONCAT(__IO_PREFIX,iowrite8)(b, addr);
}
__EXTERN_INLINE void
IO_CONCAT(__IO_PREFIX,writew)(u16 b, volatile void __iomem *a)
{
- IO_CONCAT(__IO_PREFIX,iowrite16)(b, (void __iomem *)a);
+ void __iomem *addr = (void __iomem *)a;
+ IO_CONCAT(__IO_PREFIX,iowrite16)(b, addr);
}
#endif
#define _ASM_GENERIC__TLB_H
#include <linux/swap.h>
+#include <linux/quicklist.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
static inline void
tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
{
+#ifdef CONFIG_QUICKLIST
+	tlb->need_flush += __get_cpu_var(quicklist)[0].nr_pages != 0;
+#endif
tlb_flush_mmu(tlb, start, end);
/* keep the page table cache within bounds */
header-y += taskstats.h
header-y += telephony.h
header-y += termios.h
-header-y += ticable.h
header-y += times.h
header-y += tiocl.h
header-y += tipc.h
#include <linux/types.h>
+typedef unsigned short apm_event_t;
+typedef unsigned short apm_eventinfo_t;
+
struct apm_bios_info {
__u16 version;
__u16 cseg;
#ifdef __KERNEL__
-typedef unsigned short apm_event_t;
-typedef unsigned short apm_eventinfo_t;
-
#define APM_CS (GDT_ENTRY_APMBIOS_BASE * 8)
#define APM_CS_16 (APM_CS + 8)
#define APM_DS (APM_CS_16 + 8)
extern unsigned long max_huge_pages;
extern unsigned long hugepages_treat_as_movable;
-extern int hugetlb_dynamic_pool;
+extern unsigned long nr_overcommit_huge_pages;
extern const unsigned long hugetlb_zero, hugetlb_infinity;
extern int sysctl_hugetlb_shm_group;
},
{
.ctl_name = CTL_UNNUMBERED,
- .procname = "hugetlb_dynamic_pool",
- .data = &hugetlb_dynamic_pool,
- .maxlen = sizeof(hugetlb_dynamic_pool),
+ .procname = "nr_overcommit_hugepages",
+ .data = &nr_overcommit_huge_pages,
+ .maxlen = sizeof(nr_overcommit_huge_pages),
.mode = 0644,
- .proc_handler = &proc_dointvec,
+ .proc_handler = &proc_doulongvec_minmax,
},
#endif
{
{}
};
-static struct trans_ctl_table trans_net_ax25_table[] = {
+static struct trans_ctl_table trans_net_ax25_param_table[] = {
{ NET_AX25_IP_DEFAULT_MODE, "ip_default_mode" },
{ NET_AX25_DEFAULT_MODE, "ax25_default_mode" },
{ NET_AX25_BACKOFF_TYPE, "backoff_type" },
{}
};
+static struct trans_ctl_table trans_net_ax25_table[] = {
+ { 0, NULL, trans_net_ax25_param_table },
+ {}
+};
+
static struct trans_ctl_table trans_net_bridge_table[] = {
{ NET_BRIDGE_NF_CALL_ARPTABLES, "bridge-nf-call-arptables" },
{ NET_BRIDGE_NF_CALL_IPTABLES, "bridge-nf-call-iptables" },
def_bool y
depends on SPARSEMEM && !SPARSEMEM_STATIC
-#
-# SPARSEMEM_VMEMMAP uses a virtually mapped mem_map to optimise pfn_to_page
-# and page_to_pfn. The most efficient option where kernel virtual space is
-# not under pressure.
-#
config SPARSEMEM_VMEMMAP_ENABLE
def_bool n
config SPARSEMEM_VMEMMAP
- bool
- depends on SPARSEMEM
- default y if (SPARSEMEM_VMEMMAP_ENABLE)
+ bool "Sparse Memory virtual memmap"
+ depends on SPARSEMEM && SPARSEMEM_VMEMMAP_ENABLE
+ default y
+ help
+ SPARSEMEM_VMEMMAP uses a virtually mapped memmap to optimise
+ pfn_to_page and page_to_pfn operations. This is the most
+ efficient option when sufficient kernel resources are available.
# eventually, we can have this option just 'select SPARSEMEM'
config MEMORY_HOTPLUG
static unsigned int surplus_huge_pages_node[MAX_NUMNODES];
static gfp_t htlb_alloc_mask = GFP_HIGHUSER;
unsigned long hugepages_treat_as_movable;
-int hugetlb_dynamic_pool;
+unsigned long nr_overcommit_huge_pages;
static int hugetlb_next_nid;
/*
unsigned long address)
{
struct page *page;
+ unsigned int nid;
- /* Check if the dynamic pool is enabled */
- if (!hugetlb_dynamic_pool)
+ /*
+ * Assume we will successfully allocate the surplus page to
+ * prevent racing processes from causing the surplus to exceed
+ * overcommit
+ *
+ * This however introduces a different race, where a process B
+ * tries to grow the static hugepage pool while alloc_pages() is
+ * called by process A. B will only examine the per-node
+ * counters in determining if surplus huge pages can be
+ * converted to normal huge pages in adjust_pool_surplus(). A
+ * won't be able to increment the per-node counter, until the
+ * lock is dropped by B, but B doesn't drop hugetlb_lock until
+ * no more huge pages can be converted from surplus to normal
+ * state (and doesn't try to convert again). Thus, we have a
+ * case where a surplus huge page exists, the pool is grown, and
+ * the surplus huge page still exists after, even though it
+ * should just have been converted to a normal huge page. This
+ * does not leak memory, though, as the hugepage will be freed
+ * once it is out of use. It also does not allow the counters to
+ * go out of whack in adjust_pool_surplus() as we don't modify
+ * the node values until we've gotten the hugepage and only the
+ * per-node value is checked there.
+ */
+ spin_lock(&hugetlb_lock);
+ if (surplus_huge_pages >= nr_overcommit_huge_pages) {
+ spin_unlock(&hugetlb_lock);
return NULL;
+ } else {
+ nr_huge_pages++;
+ surplus_huge_pages++;
+ }
+ spin_unlock(&hugetlb_lock);
page = alloc_pages(htlb_alloc_mask|__GFP_COMP|__GFP_NOWARN,
HUGETLB_PAGE_ORDER);
+
+ spin_lock(&hugetlb_lock);
if (page) {
+ nid = page_to_nid(page);
set_compound_page_dtor(page, free_huge_page);
- spin_lock(&hugetlb_lock);
- nr_huge_pages++;
- nr_huge_pages_node[page_to_nid(page)]++;
- surplus_huge_pages++;
- surplus_huge_pages_node[page_to_nid(page)]++;
- spin_unlock(&hugetlb_lock);
+ /*
+ * We incremented the global counters already
+ */
+ nr_huge_pages_node[nid]++;
+ surplus_huge_pages_node[nid]++;
+ } else {
+ nr_huge_pages--;
+ surplus_huge_pages--;
}
+ spin_unlock(&hugetlb_lock);
return page;
}
* Increase the pool size
* First take pages out of surplus state. Then make up the
* remaining difference by allocating fresh huge pages.
+ *
+ * We might race with alloc_buddy_huge_page() here and be unable
+ * to convert a surplus huge page to a normal huge page. That is
+ * not critical, though, it just means the overall size of the
+ * pool might be one hugepage larger than it needs to be, but
+ * within all the constraints specified by the sysctls.
*/
spin_lock(&hugetlb_lock);
while (surplus_huge_pages && count > persistent_huge_pages) {
* to keep enough around to satisfy reservations). Then place
* pages into surplus state as needed so the pool will shrink
* to the desired size as pages become free.
+ *
+ * By placing pages into the surplus state independent of the
+ * overcommit value, we are allowing the surplus pool size to
+ * exceed overcommit. There are few sane options here. Since
+ * alloc_buddy_huge_page() is checking the global counter,
+ * though, we'll note that we're not allowed to exceed surplus
+ * and won't grow the pool anywhere else. Not until one of the
+ * sysctls are changed, or the surplus pages go out of use.
*/
min_count = resv_huge_pages + nr_huge_pages - free_huge_pages;
min_count = max(count, min_count);
struct page *page = __rmqueue(zone, order, migratetype);
if (unlikely(page == NULL))
break;
+
+ /*
+ * Split buddy pages returned by expand() are received here
+	 * in physical page order. The page is added to the caller's
+	 * list and the list head then moves forward. From the caller's
+	 * perspective, the linked list is ordered by page number under
+ * some conditions. This is useful for IO devices that can
+ * merge IO requests if the physical pages are ordered
+ * properly.
+ */
list_add(&page->lru, list);
set_page_private(page, migratetype);
+ list = &page->lru;
}
spin_unlock(&zone->lock);
return i;
void **object;
struct page *new;
- /* We handle __GFP_ZERO in the caller */
- gfpflags &= ~__GFP_ZERO;
-
if (!c->page)
goto new_slab;
return -EEXIST;
section = sparse_index_alloc(nid);
+ if (!section)
+ return -ENOMEM;
/*
* This lock keeps two different sections from
* reallocating for the same index
* no locking for this, because it does its own
* plus, it does a kmalloc
*/
- sparse_index_init(section_nr, pgdat->node_id);
+ ret = sparse_index_init(section_nr, pgdat->node_id);
+ if (ret < 0 && ret != -EEXIST)
+ return ret;
memmap = kmalloc_section_memmap(section_nr, pgdat->node_id, nr_pages);
+ if (!memmap)
+ return -ENOMEM;
usemap = __kmalloc_section_usemap();
+ if (!usemap) {
+ __kfree_section_memmap(memmap, nr_pages);
+ return -ENOMEM;
+ }
pgdat_resize_lock(pgdat, &flags);
goto out;
}
- if (!usemap) {
- ret = -ENOMEM;
- goto out;
- }
ms->section_mem_map |= SECTION_MARKED_PRESENT;
ret = sparse_init_one_section(ms, section_nr, memmap, usemap);
out:
pgdat_resize_unlock(pgdat, &flags);
- if (ret <= 0)
+ if (ret <= 0) {
+ kfree(usemap);
__kfree_section_memmap(memmap, nr_pages);
+ }
return ret;
}
#endif
for l in os.popen("nm --size-sort " + file).readlines():
size, type, name = l[:-1].split()
if type in "tTdDbB":
- if "." in name: name = "static." + name.split(".")[0]
+ # function names begin with '.' on 64-bit powerpc
+ if "." in name[1:]: name = "static." + name.split(".")[0]
sym[name] = sym.get(name, 0) + int(size, 16)
return sym
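
For context, the helper above is getsizes() in scripts/bloat-o-meter,
which is typically run against two kernel images, e.g.:

	./scripts/bloat-o-meter vmlinux.before vmlinux.after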