From: Srivatsa S. Bhat
Date: Thu, 22 May 2014 00:44:06 +0000 (+1000)
Subject: CPU hotplug, smp: Flush any pending IPI callbacks before CPU offline
X-Git-Tag: next-20140530~2^2~58
X-Git-Url: https://git.karo-electronics.de/?a=commitdiff_plain;h=ab7a42783d939cdbe729c18ab32dbf0d25746ea2;p=karo-tx-linux.git

CPU hotplug, smp: Flush any pending IPI callbacks before CPU offline

During CPU offline, in the stop-machine loop, we use 2 separate stages
to disable interrupts, to ensure that the CPU going offline doesn't get
any new IPIs from the other CPUs after it has gone offline.

However, an IPI sent much earlier might arrive late on the target CPU
(possibly _after_ the CPU has gone offline) due to hardware latencies,
and due to this, the smp-call-function callbacks queued on the outgoing
CPU might not get noticed (and hence not executed) at all.

This is somewhat theoretical, but in any case, it makes sense to
explicitly loop through the call_single_queue and flush any pending
callbacks before the CPU goes completely offline.

So, flush the queued smp-call-function callbacks in the
MULTI_STOP_DISABLE_IRQ_ACTIVE stage, after disabling interrupts on the
active CPU. This can be trivially achieved by invoking the
generic_smp_call_function_single_interrupt() function itself (and since
the outgoing CPU is still online at this point, we won't trigger the
"IPI to offline CPU" warning in this function; so we are safe to call
it here).

This way, we would have handled all the queued callbacks before going
offline, and also, no new IPIs can be sent by the other CPUs to the
outgoing CPU at that point, because they will all be executing the
stop-machine code with interrupts disabled.

Signed-off-by: Srivatsa S. Bhat
Suggested-by: Frederic Weisbecker
Reviewed-by: Tejun Heo
Cc: Peter Zijlstra
Cc: Oleg Nesterov
Signed-off-by: Andrew Morton
---
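For context, the flush this patch relies on looks roughly like the
following minimal sketch, assuming the kernel/smp.c implementation of
this era. The helper name and the queue parameter below are invented for
illustration; the real generic_smp_call_function_single_interrupt()
operates on the per-CPU call_single_queue directly and additionally
handles the "IPI to offline CPU" warning and csd locking.

#include <linux/smp.h>
#include <linux/llist.h>
#include <linux/irqflags.h>
#include <linux/bug.h>

/*
 * Sketch only (hypothetical helper): drain a call_single_queue with
 * interrupts disabled, the way kernel/smp.c does it in this era.
 */
static void flush_call_single_queue_sketch(struct llist_head *queue)
{
	struct llist_node *entry;
	struct call_single_data *csd, *csd_next;

	WARN_ON(!irqs_disabled());

	/* Atomically grab everything queued so far. */
	entry = llist_del_all(queue);
	/* Entries were pushed LIFO; restore submission order. */
	entry = llist_reverse_order(entry);

	/* Run each callback. */
	llist_for_each_entry_safe(csd, csd_next, entry, llist) {
		csd->func(csd->info);
		/* The real code also releases the csd lock here (csd_unlock()). */
	}
}

Because llist_del_all() removes every queued entry atomically, running
this with interrupts disabled right before offline leaves nothing
pending on the outgoing CPU.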
diff --git a/include/linux/smp.h b/include/linux/smp.h
index 633f5edd7470..e6b090d73c07 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -151,6 +151,8 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func,
 
 static inline void kick_all_cpus_sync(void) { }
 
+static inline void generic_smp_call_function_single_interrupt(void) { }
+
 #endif /* !SMP */
 
 /*
diff --git a/kernel/smp.c b/kernel/smp.c
index 306f8180b0d5..b765167406b3 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -177,9 +177,18 @@ static int generic_exec_single(int cpu, struct call_single_data *csd,
 	return 0;
 }
 
-/*
- * Invoked by arch to handle an IPI for call function single. Must be
- * called from the arch with interrupts disabled.
+/**
+ * generic_smp_call_function_single_interrupt - Execute SMP IPI callbacks
+ *
+ * Invoked by arch to handle an IPI for call function single.
+ *
+ * This is also invoked by a CPU about to go offline, to flush any pending
+ * smp-call-function callbacks queued on this CPU (including those for which
+ * the source CPU's IPIs might not have been received on this CPU yet).
+ * This ensures that all pending IPI callbacks are run before the CPU goes
+ * completely offline.
+ *
+ * Must be called with interrupts disabled.
  */
 void generic_smp_call_function_single_interrupt(void)
 {
@@ -187,6 +196,8 @@ void generic_smp_call_function_single_interrupt(void)
 	struct call_single_data *csd, *csd_next;
 	static bool warned;
 
+	WARN_ON(!irqs_disabled());
+
 	entry = llist_del_all(&__get_cpu_var(call_single_queue));
 	entry = llist_reverse_order(entry);
 
diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 9486c3ad5f77..9973a6928a06 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include <linux/smp.h>
 
 /*
  * Structure to determine completion condition and record errors. May
@@ -223,6 +224,16 @@ static int multi_cpu_stop(void *data)
 				if (is_active) {
 					local_irq_disable();
 					hard_irq_disable();
+
+					/*
+					 * IPIs (from the inactive CPUs) might
+					 * arrive late due to hardware latencies.
+					 * So flush out any pending IPI callbacks
+					 * explicitly, to ensure that the outgoing
+					 * CPU doesn't go offline with work still
+					 * pending (during CPU hotplug).
+					 */
+					generic_smp_call_function_single_interrupt();
 				}
 				break;
 			case MULTI_STOP_RUN:
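To see why the flush cannot race with new work, the two-stage sequence
in multi_cpu_stop() can be pictured with the simplified, hypothetical
model below. The enum and helper names are invented for illustration;
only MULTI_STOP_DISABLE_IRQ_ACTIVE, local_irq_disable(),
hard_irq_disable() and the flush call correspond to the real code.

#include <linux/smp.h>
#include <linux/irqflags.h>
#include <linux/interrupt.h>

/* Hypothetical model of the two IRQ-disable stages during CPU offline. */
enum stop_stage_sketch {
	STAGE_DISABLE_IRQ_INACTIVE,	/* all CPUs except the outgoing one */
	STAGE_DISABLE_IRQ_ACTIVE,	/* the outgoing (active) CPU itself */
};

static void handle_stop_stage_sketch(enum stop_stage_sketch stage, bool is_active)
{
	switch (stage) {
	case STAGE_DISABLE_IRQ_INACTIVE:
		/* Stage 1: the other CPUs stop taking interrupts, so they
		 * can no longer queue new smp-call-function work. */
		if (!is_active) {
			local_irq_disable();
			hard_irq_disable();
		}
		break;
	case STAGE_DISABLE_IRQ_ACTIVE:
		/* Stage 2: the outgoing CPU disables interrupts and then
		 * drains anything already queued, including callbacks whose
		 * IPIs arrived late due to hardware latencies. */
		if (is_active) {
			local_irq_disable();
			hard_irq_disable();
			generic_smp_call_function_single_interrupt();
		}
		break;
	}
}

By the time the outgoing CPU reaches stage 2, every other CPU is spinning
in the stop-machine loop with interrupts off, so no new callbacks can be
queued; anything already queued (or delivered late by the hardware) is
drained by the flush before the CPU goes completely offline.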