From 743cb1ff191f00fee653212bdbcee1e56086d6ce Mon Sep 17 00:00:00 2001
From: Peter Zijlstra
Date: Tue, 29 Jul 2014 17:00:21 +0200
Subject: [PATCH] sched/fair: Make calculate_imbalance() independent

Rik noticed that calculate_imbalance() relies on update_sd_pick_busiest()
to guarantee that busiest->sum_nr_running > busiest->group_capacity_factor.

Break this implicit assumption (with the intent of not providing it
anymore) by having calculate_imbalance() verify it and not rely on others.

Reported-by: Rik van Riel
Signed-off-by: Peter Zijlstra
Acked-by: Vincent Guittot
Cc: Linus Torvalds
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/20140729152631.GW12054@laptop.lan
Signed-off-by: Ingo Molnar
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bfa3c86d0d68..e9477e6193fc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6248,7 +6248,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		return fix_small_imbalance(env, sds);
 	}
 
-	if (!busiest->group_imb) {
+	if (busiest->sum_nr_running > busiest->group_capacity_factor) {
 		/*
 		 * Don't want to pull so many tasks that a group would go idle.
 		 * Except of course for the group_imb case, since then we might
-- 
2.34.1
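
For context, a small standalone C sketch of the before/after condition. The
struct, field values, and helper names below are hypothetical stand-ins for
the kernel's sg_lb_stats, meant only to illustrate why verifying
sum_nr_running > group_capacity_factor locally is safer than assuming
update_sd_pick_busiest() already guaranteed it: the branch goes on to
subtract group_capacity_factor from sum_nr_running, which only makes sense
when that inequality actually holds.

/*
 * Standalone sketch, not kernel code: models the old vs. new guard in
 * calculate_imbalance().  The struct and values are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

struct group_stats {				/* stand-in for sg_lb_stats */
	unsigned long sum_nr_running;		/* tasks queued on the group */
	unsigned long group_capacity_factor;	/* tasks the group can run at capacity */
	bool group_imb;				/* group is internally imbalanced */
};

/* Old guard: trusts that the caller ensured nr_running > capacity factor. */
static bool old_check(const struct group_stats *busiest)
{
	return !busiest->group_imb;
}

/* New guard: verifies the assumption locally. */
static bool new_check(const struct group_stats *busiest)
{
	return busiest->sum_nr_running > busiest->group_capacity_factor;
}

int main(void)
{
	/* Hypothetical group that is neither over capacity nor imbalanced. */
	struct group_stats busiest = {
		.sum_nr_running		= 2,
		.group_capacity_factor	= 4,
		.group_imb		= false,
	};

	if (old_check(&busiest)) {
		/*
		 * Without the caller-provided guarantee, this unsigned
		 * subtraction wraps around to a huge value.
		 */
		unsigned long above = busiest.sum_nr_running -
				      busiest.group_capacity_factor;
		printf("old guard taken: above-capacity term = %lu (bogus)\n", above);
	}

	if (new_check(&busiest))
		printf("new guard taken: group really is over capacity\n");
	else
		printf("new guard skipped: group not over capacity\n");

	return 0;
}

Running the sketch, the old guard is taken for a group that is not over
capacity and the subtraction wraps, while the new guard simply skips the
branch; that is the implicit dependency on update_sd_pick_busiest() the
patch removes.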