x86/asm/tsc, x86/cpu/amd: Use the full 64-bit TSC to detect the K6 bug
author Andy Lutomirski <luto@kernel.org>
Thu, 25 Jun 2015 16:44:01 +0000 (18:44 +0200)
committer Ingo Molnar <mingo@kernel.org>
Mon, 6 Jul 2015 13:23:27 +0000 (15:23 +0200)
This code is timing one million indirect calls (K6_BUG_LOOP), so the
added overhead of counting the number of cycles elapsed as a 64-bit
number should be insignificant.  Drop the optimization of using a
32-bit count.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Len Brown <lenb@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kvm ML <kvm@vger.kernel.org>
Link: http://lkml.kernel.org/r/d58f339a9c0dd8352b50d2f7a216f67ec2844f20.1434501121.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/kernel/cpu/amd.c

index dd3a4baffe50cca6595a57755e17c7d284ee999c..a69710db6112f7b01b33459a3693e54778457e5a 100644
@@ -114,7 +114,7 @@ static void init_amd_k6(struct cpuinfo_x86 *c)
                const int K6_BUG_LOOP = 1000000;
                int n;
                void (*f_vide)(void);
-               unsigned long d, d2;
+               u64 d, d2;
 
                printk(KERN_INFO "AMD K6 stepping B detected - ");
 
@@ -125,10 +125,10 @@ static void init_amd_k6(struct cpuinfo_x86 *c)
 
                n = K6_BUG_LOOP;
                f_vide = vide;
-               rdtscl(d);
+               d = native_read_tsc();
                while (n--)
                        f_vide();
-               rdtscl(d2);
+               d2 = native_read_tsc();
                d = d2-d;
 
                if (d > 20*K6_BUG_LOOP)