perf_events: Improve task_sched_in()
author Stephane Eranian <eranian@google.com>
Thu, 11 Mar 2010 06:26:05 +0000 (22:26 -0800)
committer Ingo Molnar <mingo@elte.hu>
Thu, 11 Mar 2010 14:23:28 +0000 (15:23 +0100)
This patch optimizes perf_event_task_sched_in() so that events are not
scheduled twice in a row.

Without it, the perf_disable()/perf_enable() pair is invoked twice: the
pinned events start counting while the flexible events are still being
scheduled, and we go through hw_perf_enable() twice.

By encapsulating the whole sequence in a single perf_disable()/perf_enable()
pair, we ensure that hw_perf_enable() is invoked only once, thanks to the
refcount protection.
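
The protection works like the sketch below. This is a minimal user-space
model of the refcounting idea only, not the kernel implementation: the
names mirror perf_disable()/perf_enable() and hw_perf_disable()/
hw_perf_enable(), and a plain int stands in for what is a per-CPU
counter in the kernel.

    #include <stdio.h>

    static int perf_disable_count;  /* per-CPU in the real kernel */

    static void hw_perf_disable(void) { puts("hw_perf_disable()"); }
    static void hw_perf_enable(void)  { puts("hw_perf_enable()"); }

    static void perf_disable(void)
    {
            /* Only the outermost disable touches the PMU. */
            if (!perf_disable_count++)
                    hw_perf_disable();
    }

    static void perf_enable(void)
    {
            /* Only the matching outermost enable re-enables the PMU. */
            if (!--perf_disable_count)
                    hw_perf_enable();
    }

    int main(void)
    {
            perf_disable();  /* outer pair added by this patch */
            perf_disable();  /* inner pair taken while scheduling events */
            perf_enable();   /* count 2 -> 1: PMU stays disabled */
            perf_enable();   /* count 1 -> 0: hw_perf_enable() runs once */
            return 0;
    }

With the outer pair in place, the inner disable/enable taken while the
pinned and flexible events are scheduled only moves the count between
1 and 2, so hw_perf_enable() fires a single time at the end.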

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268288765-5326-1-git-send-email-eranian@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
kernel/perf_event.c

index 52c69a34d6975cb5b078083cf57a6cda9a7d5f5f..3853d49c7d56365d7df880017c55d1f2663a4f6f 100644 (file)
@@ -1368,6 +1368,8 @@ void perf_event_task_sched_in(struct task_struct *task)
        if (cpuctx->task_ctx == ctx)
                return;
 
+       perf_disable();
+
        /*
         * We want to keep the following priority order:
         * cpu pinned (that don't need to move), task pinned,
@@ -1380,6 +1382,8 @@ void perf_event_task_sched_in(struct task_struct *task)
        ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE);
 
        cpuctx->task_ctx = ctx;
+
+       perf_enable();
 }
 
 #define MAX_INTERRUPTS (~0ULL)