From: Rik van Riel
Date: Fri, 6 Feb 2015 20:02:05 +0000 (-0500)
Subject: x86/fpu: Also check fpu_lazy_restore() when use_eager_fpu()
X-Git-Url: https://git.karo-electronics.de/?a=commitdiff_plain;h=728e53fef429a0f3c9dda3587c3ccc57ad268b70;p=linux-beck.git

x86/fpu: Also check fpu_lazy_restore() when use_eager_fpu()

With Oleg's patch:

  33a3ebdc077f ("x86, fpu: Don't abuse has_fpu in __kernel_fpu_begin/end()")

kernel threads no longer have an FPU state, even on systems with
use_eager_fpu().

That in turn means that a task may still have its FPU state loaded in
the FPU registers, if the task only got interrupted by kernel threads
from when it went to sleep, to when it woke up again.

In that case, there is no need to restore the FPU state for this task,
since it is still in the registers.

The kernel can simply use the same logic to determine this as is used
for !use_eager_fpu() systems.

Signed-off-by: Rik van Riel
Cc: Linus Torvalds
Cc: Oleg Nesterov
Link: http://lkml.kernel.org/r/1423252925-14451-9-git-send-email-riel@redhat.com
Signed-off-by: Borislav Petkov
---

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index e5f8f8eaf225..19fb41cc4755 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -458,7 +458,7 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
 		task_disable_lazy_fpu_restore(old);
 		if (fpu.preload) {
 			new->thread.fpu_counter++;
-			if (!use_eager_fpu() && fpu_lazy_restore(new, cpu))
+			if (fpu_lazy_restore(new, cpu))
 				fpu.preload = 0;
 			else
 				prefetch(new->thread.fpu.state);
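
For illustration, below is a small user-space sketch of the lazy-restore idea the
patch relies on. It is not kernel code: the function lazy_restore_possible(), the
fpu_owner[] array and struct task are made up for this example. The real check is
fpu_lazy_restore() in arch/x86/include/asm/fpu-internal.h, which (in kernels of this
era) performs an analogous comparison of the per-CPU FPU owner and the task's
fpu.last_cpu.

/*
 * Sketch of the lazy FPU restore check. The FPU registers still hold a
 * task's state if no other task's state was loaded on this CPU since the
 * task last ran here: the CPU still records the task as the FPU owner,
 * and the task last ran on this very CPU. Kernel threads that never touch
 * the FPU do not change the owner, so the restore can be skipped.
 */
#include <stdbool.h>
#include <stdio.h>

struct task {
	int last_cpu;		/* CPU whose registers last held this task's FPU state */
	const char *name;
};

/* Models the per-CPU "which task owns the FPU registers" pointer. */
static struct task *fpu_owner[2];

static bool lazy_restore_possible(struct task *new, int cpu)
{
	return fpu_owner[cpu] == new && new->last_cpu == cpu;
}

int main(void)
{
	struct task a = { .last_cpu = 0, .name = "task A" };

	fpu_owner[0] = &a;	/* task A's state was left in CPU 0's registers */

	/* Only kernel threads ran in between and the owner is unchanged,
	 * so the restore can be skipped. */
	printf("%s: skip restore on CPU 0? %s\n", a.name,
	       lazy_restore_possible(&a, 0) ? "yes" : "no");

	/* If task A is scheduled on CPU 1 instead, its state is not there. */
	printf("%s: skip restore on CPU 1? %s\n", a.name,
	       lazy_restore_possible(&a, 1) ? "yes" : "no");

	return 0;
}

Before this patch, switch_fpu_prepare() only applied that check on
!use_eager_fpu() systems; dropping the use_eager_fpu() test extends the
skip to eager-FPU systems, where kernel threads no longer carry FPU state.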