rcu: Allow RCU quiescent-state forcing to be preempted
author: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Mon, 25 Jun 2012 15:41:11 +0000 (08:41 -0700)
committer: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Sun, 23 Sep 2012 14:41:54 +0000 (07:41 -0700)
RCU quiescent-state forcing is currently carried out without preemption
points, which can result in excessive latency spikes on large systems
(many hundreds or thousands of CPUs).  This patch therefore inserts
a voluntary preemption point into force_qs_rnp(), which should greatly
reduce the magnitude of these spikes.

Reported-by: Mike Galbraith <mgalbraith@suse.de>
Reported-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
kernel/rcutree.c

index 6182686de4a671513b1a3039943b05c3621f2a12..723e2e72307429597fb7d43204273602ded7ec84 100644 (file)
@@ -1767,6 +1767,7 @@ static void force_qs_rnp(struct rcu_state *rsp, int (*f)(struct rcu_data *))
        struct rcu_node *rnp;
 
        rcu_for_each_leaf_node(rsp, rnp) {
+               cond_resched();
                mask = 0;
                raw_spin_lock_irqsave(&rnp->lock, flags);
                if (!rcu_gp_in_progress(rsp)) {