commit c1804d547dc098363443667609c272d1e4d15ee8 upstream
The previous patch preserved the retry logic, but it looks unneeded.

__migrate_task() can only fail if we raced with a migration after we dropped
the lock, but in that case the caller of set_cpus_allowed/etc must initiate
the migration itself if ->on_rq == T.

We have already fixed p->cpus_allowed, and the changes to the active/online
masks must be visible to the racer, so it should migrate the task to an
online cpu correctly.
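
For illustration only, a minimal sketch of the racer's side that this
argument assumes; set_cpus_allowed_sketch(), task_on_rq() and
migrate_to_allowed_cpu() are hypothetical names, not the real
kernel/sched.c code:

/*
 * Simplified sketch of the set_cpus_allowed() path ("the racer").
 * Once it has updated p->cpus_allowed, it migrates a queued task
 * itself, which is why a failed __migrate_task() in
 * move_task_off_dead_cpu() can be ignored instead of retried.
 */
static int set_cpus_allowed_sketch(struct task_struct *p,
                                   const struct cpumask *new_mask)
{
        /* Affinity update, done under the task's runqueue lock. */
        cpumask_copy(&p->cpus_allowed, new_mask);

        /*
         * If the task is queued (->on_rq == T) on a CPU it may no
         * longer run on, the racer initiates the migration itself.
         */
        if (task_on_rq(p) && !cpumask_test_cpu(task_cpu(p), new_mask))
                migrate_to_allowed_cpu(p);      /* hypothetical helper */

        return 0;
}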
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100315091014.GA9138@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
 	struct rq *rq = cpu_rq(dead_cpu);
 	int needs_cpu, uninitialized_var(dest_cpu);
 	unsigned long flags;
-again:
+
 	local_irq_save(flags);
 	spin_lock(&rq->lock);
 	needs_cpu = (task_cpu(p) == dead_cpu) && (p->state != TASK_WAKING);
 	if (needs_cpu)
 		dest_cpu = select_fallback_rq(dead_cpu, p);
 	spin_unlock(&rq->lock);
-
-	/* It can have affinity changed while we were choosing. */
+	/*
+	 * It can only fail if we race with set_cpus_allowed(),
+	 * in which case the racer should migrate the task anyway.
+	 */
 	if (needs_cpu)
-		needs_cpu = !__migrate_task(p, dead_cpu, dest_cpu);
+		__migrate_task(p, dead_cpu, dest_cpu);
 	local_irq_restore(flags);
-
-	if (unlikely(needs_cpu))
-		goto again;
 }
/*