From: Valentin Schneider
Date: Fri, 3 Mar 2017 11:43:03 +0000 (+0000)
Subject: sched/fair: discount task contribution to find CPU with lowest utilization
X-Git-Tag: release-20171130_firefly~4^2~100^2~28
X-Git-Url: http://demsky.eecs.uci.edu/git/?a=commitdiff_plain;h=619812e4cd6e76ff806edefbec05b01e0d91bf23;p=firefly-linux-kernel-4.4.55.git

sched/fair: discount task contribution to find CPU with lowest utilization

In some cases, the new_util of a task can be the same on several CPUs.
This causes an issue because target_util is only updated when the
current new_util is strictly smaller than target_util.

To fix that, the cpu_util_wake() return value is used alongside the
new_util value. If two CPUs compute the same new_util value, we now
also compare their cpu_util_wake() return values, and the CPU that
last ran the task is preferred.

Change-Id: Ia1ea2c4b3ec39621372c2f748862317d5b497723
Signed-off-by: Valentin Schneider
---

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 23e2b5f33ff6..fc4e2529fbd2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6221,7 +6221,8 @@ static inline int find_best_target(struct task_struct *p, bool boosted, bool pre
 	int i;
 
 	for_each_cpu_and(i, tsk_cpus_allowed(p), sched_group_cpus(sg)) {
-		unsigned long cur_capacity, new_util;
+		unsigned long cur_capacity, new_util, wake_util;
+		unsigned long min_wake_util = ULONG_MAX;
 
 		if (!cpu_online(i))
 			continue;
@@ -6231,7 +6232,8 @@ static inline int find_best_target(struct task_struct *p, bool boosted, bool pre
 		 * so prev_cpu will receive a negative bias due to the double
 		 * accounting. However, the blocked utilization may be zero.
 		 */
-		new_util = cpu_util_wake(i, p) + task_util(p);
+		wake_util = cpu_util_wake(i, p);
+		new_util = wake_util + task_util(p);
 
 		/*
 		 * Ensure minimum capacity to grant the required boost.
@@ -6266,8 +6268,15 @@ static inline int find_best_target(struct task_struct *p, bool boosted, bool pre
 		 * Find a target cpu with the lowest/highest
 		 * utilization if prefer_idle/!prefer_idle.
 		 */
-		if ((prefer_idle && target_util > new_util) ||
-		    (!prefer_idle && target_util < new_util)) {
+		if (prefer_idle) {
+			/* Favor the CPU that last ran the task */
+			if (new_util > target_util ||
+			    wake_util > min_wake_util)
+				continue;
+			min_wake_util = wake_util;
+			target_util = new_util;
+			target_cpu = i;
+		} else if (target_util < new_util) {
 			target_util = new_util;
 			target_cpu = i;
 		}