From 5fef33f1bad3f3745ba3e14d78fd7e2c8d8db993 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <peterz@infradead.org>
Date: Fri, 24 Jun 2016 15:53:54 +0200
Subject: [PATCH] UPSTREAM: sched/fair: Fix effective_load() to consistently
 use smoothed load

Starting with the following commit:

  fde7d22e01aa ("sched/fair: Fix overly small weight for interactive group entities")

calc_tg_weight() doesn't compute the right value as expected by
effective_load().

The difference is in the 'correction' term. In order to ensure
\Sum rw_j >= rw_i we cannot use tg->load_avg directly, since that might
be lagging a correction on the current cfs_rq->avg.load_avg value.
Therefore we use tg->load_avg - cfs_rq->tg_load_avg_contrib +
cfs_rq->avg.load_avg.

Now, per the referenced commit, calc_tg_weight() doesn't use
cfs_rq->avg.load_avg, as is later used in @w, but uses
cfs_rq->load.weight instead.

So stop using calc_tg_weight() and do it explicitly.

The effects of this bug are wake_affine() making randomly poor choices
in cgroup-intense workloads.

Change-Id: I1c0058ff674650cf295c8dc3b88a5a3de4bddab0
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <stable@vger.kernel.org> # v4.3+
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: fde7d22e01aa ("sched/fair: Fix overly small weight for interactive group entities")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 7dd4912594daf769a46744848b05bd5bc6d62469)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
---
 kernel/sched/fair.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e49408d0f590..df6fc28f80c7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -830,8 +830,6 @@ void post_init_entity_util_avg(struct sched_entity *se)
 	attach_entity_cfs_rq(se);
 }
 
-static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq);
-static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq);
 #else
 void init_entity_runnable_average(struct sched_entity *se)
 {
-- 
2.34.1
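
Reviewer note (not part of the patch): the correction term described in the
commit message can be illustrated with a small standalone C sketch. It is not
kernel code; the variable names merely mirror tg->load_avg,
cfs_rq->tg_load_avg_contrib and cfs_rq->avg.load_avg, and the values are made
up. It only shows why substituting the current cfs_rq->avg.load_avg for the
possibly stale contribution keeps W = wg + \Sum rw_j at least as large as
w = rw_i.

/* note_effective_load.c - illustrative sketch only, not kernel code */
#include <stdio.h>

int main(void)
{
	/* Made-up numbers: the group-wide sum lags this runqueue's load. */
	long tg_load_avg         = 1000; /* stands in for tg->load_avg (may be stale)    */
	long tg_load_avg_contrib =  300; /* this cfs_rq's last published contribution    */
	long cfs_rq_avg_load_avg =  450; /* current smoothed cfs_rq->avg.load_avg (rw_i) */
	long wg = 0;                     /* extra group weight being considered          */

	/* w = rw_i: use the smoothed load, consistently with the sum below. */
	long w = cfs_rq_avg_load_avg;

	/*
	 * W = wg + \Sum rw_j, with this runqueue's stale contribution
	 * replaced by its current smoothed load so that W >= w holds.
	 */
	long W = wg + tg_load_avg - tg_load_avg_contrib + cfs_rq_avg_load_avg;

	printf("w = %ld, W = %ld, W >= w: %s\n", w, W, W >= w ? "yes" : "no");
	return 0;
}

When tg->load_avg is fully up to date the correction is a no-op; when it lags,
the substitution is what prevents W from dropping below w.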