sched: Enable idle balance to pull single task towards cpu with higher capacity
author    Dietmar Eggemann <dietmar.eggemann@arm.com>
          Mon, 26 Jan 2015 19:47:28 +0000 (19:47 +0000)
committer Punit Agrawal <punit.agrawal@arm.com>
          Mon, 21 Mar 2016 12:34:30 +0000 (12:34 +0000)
We do not want to miss out on the ability to pull a single remaining
task from a potential source cpu towards an idle destination cpu. Add an
extra criterion to need_active_balance() to kick off an active load
balance if the source cpu is over-utilized and has lower capacity than
the destination cpu.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
kernel/sched/fair.c

index bebc8367edee1b935c4dae13a5a7a18253675005..fe40cdb8e64024541415e6272badcb6d8837635f 100644
@@ -4779,6 +4779,11 @@ static inline bool task_fits_spare(struct task_struct *p, int cpu)
        return __task_fits(p, cpu, cpu_util(cpu));
 }
 
+static bool cpu_overutilized(int cpu)
+{
+       return (capacity_of(cpu) * 1024) < (cpu_util(cpu) * capacity_margin);
+}
+
 /*
  * find_idlest_group finds and returns the least busy CPU group within the
  * domain.
@@ -6995,6 +7000,13 @@ static int need_active_balance(struct lb_env *env)
                        return 1;
        }
 
+       if ((capacity_of(env->src_cpu) < capacity_of(env->dst_cpu)) &&
+                               env->src_rq->cfs.h_nr_running == 1 &&
+                               cpu_overutilized(env->src_cpu) &&
+                               !cpu_overutilized(env->dst_cpu)) {
+                       return 1;
+       }
+
        return unlikely(sd->nr_balance_failed > sd->cache_nice_tries+2);
 }
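
The following standalone sketch illustrates the two checks the patch adds. It is not kernel code: the capacity_margin value of 1280, the struct cpu_state container, the kick_active_balance() helper and the big.LITTLE-style capacity/utilization numbers are all assumptions chosen for illustration; in the patch, capacity_of(), cpu_util() and cfs.h_nr_running read live per-cpu scheduler state.

    /*
     * Illustration only: mirrors cpu_overutilized() and the extra
     * need_active_balance() criterion from the hunks above, using
     * made-up per-cpu values.
     */
    #include <stdbool.h>
    #include <stdio.h>

    /* assumed margin: utilization must stay below capacity * 1024/1280 */
    static const unsigned long capacity_margin = 1280;

    struct cpu_state {                  /* hypothetical stand-in for per-cpu state */
            unsigned long capacity;     /* capacity_of(cpu)  */
            unsigned long util;         /* cpu_util(cpu)     */
            unsigned int  nr_running;   /* cfs.h_nr_running  */
    };

    static bool cpu_overutilized(const struct cpu_state *c)
    {
            /* same comparison as the patch: capacity * 1024 < util * margin */
            return (c->capacity * 1024) < (c->util * capacity_margin);
    }

    /* mirrors the criterion added to need_active_balance() */
    static bool kick_active_balance(const struct cpu_state *src,
                                    const struct cpu_state *dst)
    {
            return src->capacity < dst->capacity &&
                   src->nr_running == 1 &&
                   cpu_overutilized(src) &&
                   !cpu_overutilized(dst);
    }

    int main(void)
    {
            /* assumed values: busy little cpu, idle big cpu */
            struct cpu_state little = { .capacity = 430,  .util = 400, .nr_running = 1 };
            struct cpu_state big    = { .capacity = 1024, .util = 0,   .nr_running = 0 };

            /* 430 * 1024 = 440320 < 400 * 1280 = 512000 -> little is over-utilized */
            printf("little over-utilized: %d\n", cpu_overutilized(&little));

            /* single task on an over-utilized, lower-capacity cpu -> kick balance */
            printf("kick active balance:  %d\n", kick_active_balance(&little, &big));
            return 0;
    }

With these assumed numbers the little cpu fails the margin test while the big cpu passes it, so the single remaining task is pulled towards the higher-capacity idle cpu, which is exactly the case the commit message describes.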