sched: use highest_prio.next to optimize pull operations
authorGregory Haskins <ghaskins@novell.com>
Mon, 29 Dec 2008 14:39:50 +0000 (09:39 -0500)
committerGregory Haskins <ghaskins@novell.com>
Mon, 29 Dec 2008 14:39:50 +0000 (09:39 -0500)
We currently take the rq->lock for every cpu in an overload state during
pull_rt_task().  However, we now have enough information via the
highest_prio.[curr|next] fields to determine whether there are any tasks
of interest to warrant the overhead of the rq->lock before we actually
take it.  So we use this information to reduce lock contention during the
pull for the case where the source rq doesn't have tasks that would
preempt the current task.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
kernel/sched_rt.c

index f8fb3edadcaada905bbb14c319587d5d998be655..d047f288c411d536fe05dd78e04dc4b3da5ecc1c 100644 (file)
@@ -1218,6 +1218,18 @@ static int pull_rt_task(struct rq *this_rq)
                        continue;
 
                src_rq = cpu_rq(cpu);
+
+               /*
+                * Don't bother taking the src_rq->lock if the next highest
+                * task is known to be lower-priority than our current task.
+                * This may look racy, but if this value is about to go
+                * logically higher, the src_rq will push this task away.
+                * And if it's going logically lower, we do not care.
+                */
+               if (src_rq->rt.highest_prio.next >=
+                   this_rq->rt.highest_prio.curr)
+                       continue;
+
                /*
                 * We can potentially drop this_rq's lock in
                 * double_lock_balance, and another CPU could