sched/numa: Use group's max nid as task's preferred nid
author	Rik van Riel <riel@redhat.com>
Mon, 23 Jun 2014 15:41:29 +0000 (11:41 -0400)
committer	Ingo Molnar <mingo@kernel.org>
Sat, 5 Jul 2014 09:17:33 +0000 (11:17 +0200)
In task_numa_placement(), always try to consolidate the tasks
in a group on the group's top nid.

If this task is part of a group that is interleaved over
multiple nodes, task_numa_migrate() will set the task's preferred
nid to the best node it could find for the task, so this patch
will cause at most one run through task_numa_migrate().
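
To make the interleaved case concrete, here is a minimal userspace
sketch of the behaviour described above. It is an illustration only,
not kernel code: the node count, the fault numbers and the helpers
placement_pick() and migrate_pick() are invented stand-ins for
task_numa_placement()'s new preferred-nid choice and for
task_numa_migrate()'s node selection (which in the kernel uses
normalized, capacity-aware weights rather than raw sums).

/*
 * Toy userspace model of the placement logic described above.  This is
 * an illustration only, NOT kernel code: the node count, the fault
 * numbers and the helpers placement_pick()/migrate_pick() are invented
 * stand-ins for task_numa_placement() and task_numa_migrate().
 */
#include <stdio.h>

#define NR_NODES 4

/* Per-node NUMA fault counts for one task and for its numa_group. */
static const unsigned long task_faults[NR_NODES]  = {  2, 55,  3,  1 };
static const unsigned long group_faults[NR_NODES] = { 82, 78, 80, 81 };

/* After this patch: the preferred nid is simply the group's top node. */
static int placement_pick(void)
{
	int nid, best = 0;

	for (nid = 1; nid < NR_NODES; nid++)
		if (group_faults[nid] > group_faults[best])
			best = nid;
	return best;
}

/*
 * Rough stand-in for task_numa_migrate(): pick the node with the best
 * combined task + group score.  The kernel uses normalized,
 * capacity-aware weights; raw sums are enough to show the idea.
 */
static int migrate_pick(void)
{
	unsigned long score, best_score = 0;
	int nid, best = 0;

	for (nid = 0; nid < NR_NODES; nid++) {
		score = task_faults[nid] + group_faults[nid];
		if (score > best_score) {
			best_score = score;
			best = nid;
		}
	}
	return best;
}

int main(void)
{
	int preferred = placement_pick();	/* group's top nid: node 0 */
	int corrected = migrate_pick();		/* one migrate pass: node 1 */

	printf("placement picks node %d, migrate pass settles on node %d\n",
	       preferred, corrected);
	return 0;
}

With these made-up numbers the group is interleaved almost evenly
across the four nodes: placement picks node 0 (the group's top nid),
and a single migrate-style pass over the combined faults moves the
preference to node 1, where this task actually faults.  Running
migrate_pick() again returns node 1 again, which is the "at most one
run through task_numa_migrate" property the changelog relies on.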

Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: mgorman@suse.de
Cc: chegu_vinod@hp.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1403538095-31256-2-git-send-email-riel@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c

index e3ff3d1c47803d36aed86437fc168a3aa0d78007..96b2d3929a4e15611e755da83d6842e130b804d9 100644
@@ -1594,23 +1594,8 @@ static void task_numa_placement(struct task_struct *p)
 
        if (p->numa_group) {
                update_numa_active_node_mask(p->numa_group);
-               /*
-                * If the preferred task and group nids are different,
-                * iterate over the nodes again to find the best place.
-                */
-               if (max_nid != max_group_nid) {
-                       unsigned long weight, max_weight = 0;
-
-                       for_each_online_node(nid) {
-                               weight = task_weight(p, nid) + group_weight(p, nid);
-                               if (weight > max_weight) {
-                                       max_weight = weight;
-                                       max_nid = nid;
-                               }
-                       }
-               }
-
                spin_unlock_irq(group_lock);
+               max_nid = max_group_nid;
        }
 
        if (max_faults) {