ARM: 7150/1: Allow kernel unaligned accesses on ARMv6+ processors
author     Catalin Marinas <catalin.marinas@arm.com>
           Mon, 7 Nov 2011 17:05:53 +0000 (18:05 +0100)
committer  黄涛 <huangtao@rock-chips.com>
           Tue, 29 May 2012 02:06:02 +0000 (10:06 +0800)
commit 8428e84d42179c2a00f5f6450866e70d802d1d05 upstream.

Recent gcc versions generate unaligned accesses by default on ARMv6 and
later processors. This patch ensures that the SCTLR.A bit is always
cleared on such processors to avoid the kernel trapping before
alignment_init() is called.
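
For illustration only (not part of the patch): the following C sketch shows the
kind of access recent gcc can compile to a single unaligned load on ARMv6 and
later (via the -munaligned-access default), which faults while SCTLR.A is still
set. The struct and function names here are made up for the example.

/*
 * Illustrative only -- not from the patch. With unaligned accesses
 * enabled by default for ARMv6+, a packed field like this can be read
 * with a single LDR at an odd address, which traps if SCTLR.A is set.
 */
#include <stdint.h>

struct __attribute__((packed)) hdr {
	uint8_t  type;
	uint32_t len;		/* starts at offset 1: unaligned */
};

uint32_t read_len(const struct hdr *h)
{
	return h->len;		/* may compile to one unaligned 32-bit load */
}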

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: John Linn <John.Linn@xilinx.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
arch/arm/kernel/head.S

diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 5a9c6cace9a629f6c7df1cb4e15c58d4a540a967..453814097c1dbc346da2bde784bffc9faa37ce27 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -353,7 +353,7 @@ __secondary_data:
  *  r13 = *virtual* address to jump to upon completion
  */
 __enable_mmu:
-#ifdef CONFIG_ALIGNMENT_TRAP
+#if defined(CONFIG_ALIGNMENT_TRAP) && __LINUX_ARM_ARCH__ < 6
        orr     r0, r0, #CR_A
 #else
        bic     r0, r0, #CR_A
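
For reference, a minimal standalone C sketch of the decision the patched
condition now makes. This is not kernel code: in the kernel the logic runs in
assembly in __enable_mmu, CR_A is the SCTLR.A (alignment check) bit, and the
helper name and the macro values chosen below are assumptions for the example.

#include <stdio.h>

#define CR_A			(1 << 1)	/* SCTLR.A: alignment check enable */
#define CONFIG_ALIGNMENT_TRAP	1		/* assumed Kconfig setting */
#define __LINUX_ARM_ARCH__	7		/* assumed: building for ARMv7 */

/* Mirrors the patched #if: only pre-ARMv6 cores keep alignment trapping on. */
static unsigned int adjust_cr_a(unsigned int cr)
{
#if defined(CONFIG_ALIGNMENT_TRAP) && __LINUX_ARM_ARCH__ < 6
	return cr | CR_A;	/* pre-ARMv6: trap unaligned accesses */
#else
	return cr & ~CR_A;	/* ARMv6+: never trap before alignment_init() */
#endif
}

int main(void)
{
	printf("SCTLR value after adjustment: %#x\n", adjust_cr_a(0));
	return 0;
}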