x86/spinlock: Leftover conversion ACCESS_ONCE->READ_ONCE
author Christian Borntraeger <borntraeger@de.ibm.com>
Tue, 6 Jan 2015 21:49:54 +0000 (22:49 +0100)
committer Christian Borntraeger <borntraeger@de.ibm.com>
Mon, 19 Jan 2015 13:14:20 +0000 (14:14 +0100)
commit 78bff1c8684f ("x86/ticketlock: Fix spin_unlock_wait() livelock")
introduced two additional ACCESS_ONCE cases in x86 spinlock.h.
Let's convert those as well.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
arch/x86/include/asm/spinlock.h

index 625660f8a2fcf0cb4b4b1a9216908c98fffe00a1..7050d864f5207c4fb384672b083a18a6584bdbbe 100644 (file)
@@ -183,10 +183,10 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
-       __ticket_t head = ACCESS_ONCE(lock->tickets.head);
+       __ticket_t head = READ_ONCE(lock->tickets.head);
 
        for (;;) {
-               struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
+               struct __raw_tickets tmp = READ_ONCE(lock->tickets);
                /*
                 * We need to check "unlocked" in a loop, tmp.head == head
                 * can be false positive because of overflow.