sched: Revert need_resched() to look at TIF_NEED_RESCHED
author	Peter Zijlstra <peterz@infradead.org>	Fri, 27 Sep 2013 15:30:03 +0000 (17:30 +0200)
committer	Ingo Molnar <mingo@kernel.org>	Sat, 28 Sep 2013 08:04:47 +0000 (10:04 +0200)
Yuanhan reported a serious throughput regression in his pigz
benchmark. Using the ftrace patch I found that several idle
paths need more TLC before we can switch the generic
need_resched() over to preempt_need_resched.

The preemption paths benefit most from preempt_need_resched and
do indeed use it; all other need_resched() users don't really
care that much, so reverting need_resched() back to
tif_need_resched() is the simple and safe solution.

Reported-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: lkp@linux.intel.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20130927153003.GF15690@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/include/asm/preempt.h
include/asm-generic/preempt.h
include/linux/sched.h

arch/x86/include/asm/preempt.h
index 1de41690ff997f677ec061581ab61f44f05075db..8729723636fd1632eebd92197dd292fa26de10d7 100644 (file)
@@ -79,14 +79,6 @@ static __always_inline bool __preempt_count_dec_and_test(void)
        GEN_UNARY_RMWcc("decl", __preempt_count, __percpu_arg(0), "e");
 }
 
-/*
- * Returns true when we need to resched -- even if we can not.
- */
-static __always_inline bool need_resched(void)
-{
-       return unlikely(test_preempt_need_resched());
-}
-
 /*
  * Returns true when we need to resched and can (barring IRQ state).
  */
include/asm-generic/preempt.h
index 5dc14ed3791c2afa42de87c8eb609ed997ed409c..ddf2b420ac8f81621ec9dd088e8379f8b686cd94 100644 (file)
@@ -84,14 +84,6 @@ static __always_inline bool __preempt_count_dec_and_test(void)
        return !--*preempt_count_ptr();
 }
 
-/*
- * Returns true when we need to resched -- even if we can not.
- */
-static __always_inline bool need_resched(void)
-{
-       return unlikely(test_preempt_need_resched());
-}
-
 /*
  * Returns true when we need to resched and can (barring IRQ state).
  */
include/linux/sched.h
index b09798b672f3036a92c03daf9d78f6217a6af1b3..2ac5285db4344a01ced312656cef38cd2f415b73 100644 (file)
@@ -2577,6 +2577,11 @@ static inline bool __must_check current_clr_polling_and_test(void)
 }
 #endif
 
+static __always_inline bool need_resched(void)
+{
+       return unlikely(tif_need_resched());
+}
+
 /*
  * Thread group CPU time accounting.
  */