ARM: sched_clock: Load cycle count after epoch stabilizes
author Stephen Boyd <sboyd@codeaurora.org>
Mon, 17 Jun 2013 22:40:58 +0000 (15:40 -0700)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tue, 16 Dec 2014 17:09:43 +0000 (09:09 -0800)
commit 7f45ce8f415f1aed22af1d0a8c551053ad540c8b
tree 311f8c1bc61016420ccdbbe66afdb49e1dd936f2
parent e6a18c108e386866325b3b6e986f612ee512787c
ARM: sched_clock: Load cycle count after epoch stabilizes

commit 336ae1180df5f69b9e0fb6561bec01c5f64361cf upstream.

There is a small race between when the cycle count is read from
the hardware and when the epoch stabilizes. Consider this
scenario:

 CPU0                           CPU1
 ----                           ----
 cyc = read_sched_clock()
 cyc_to_sched_clock()
                                 update_sched_clock()
                                  ...
                                  cd.epoch_cyc = cyc;
  epoch_cyc = cd.epoch_cyc;
  ...
  epoch_ns + cyc_to_ns((cyc - epoch_cyc)

The cyc on CPU0 was read before the epoch changed, but the
nanoseconds are calculated against the new epoch by subtracting
the new epoch_cyc from the stale cycle count. Since the new
epoch_cyc is most likely larger than the stale cycle count, the
unsigned subtraction wraps around to a huge delta, which is then
converted to nanoseconds and added to epoch_ns, causing time to
jump forward too much.

Fix this problem by reading the hardware after the epoch has
stabilized.
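
For reference, a minimal sketch of the fixed ordering in
sched_clock(), modeled on the structure of the 3.10-era
arch/arm/kernel/sched_clock.c rather than the literal backported
hunk; the clock_data fields (epoch_cyc, epoch_cyc_copy, mult,
shift), the smp_rmb() pairing and the cyc_to_ns() helper are
assumptions based on that file:

 /*
  * cd.epoch_cyc_copy is written last by update_sched_clock(), so
  * equality with cd.epoch_cyc means the epoch_cyc/epoch_ns pair
  * read above is consistent.
  */
 static unsigned long long notrace sched_clock(void)
 {
 	u64 epoch_ns;
 	u32 epoch_cyc;
 	u32 cyc;

 	/* Spin until a consistent epoch_cyc/epoch_ns pair is observed. */
 	do {
 		epoch_cyc = cd.epoch_cyc;
 		smp_rmb();
 		epoch_ns = cd.epoch_ns;
 		smp_rmb();
 	} while (epoch_cyc != cd.epoch_cyc_copy);

 	/* Read the hardware only now, so cyc can never predate epoch_cyc. */
 	cyc = read_sched_clock();
 	cyc = (cyc - epoch_cyc) & sched_clock_mask;
 	return epoch_ns + cyc_to_ns(cyc, cd.mult, cd.shift);
 }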

Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/arm/kernel/sched_clock.c