Nicolas Pitre [Wed, 28 Nov 2012 23:48:19 +0000 (18:48 -0500)]
ARM: GIC: interface to send a SGI directly
The regular gic_raise_softirq() takes as input a CPU mask which is not
adequate when we need to send an IPI to a CPU which is not represented
in the kernel to GIC mapping. That is the case with the b.L switcher
when GIC migration to the inbound CPU has not yet occurred.
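For illustration, a minimal sketch of what such a direct-SGI helper looks like, assuming the standard GIC distributor register layout; the function name and the dist_base argument are illustrative, not necessarily the exact code added by this patch:

    #include <linux/io.h>
    #include <linux/irqchip/arm-gic.h>

    /*
     * Raise SGI 'irq' directly at GIC CPU interface 'cpu_id' (0..7),
     * bypassing the kernel's cpu -> GIC interface mapping.
     */
    static void gic_send_sgi_example(void __iomem *dist_base,
                                     unsigned int cpu_id, unsigned int irq)
    {
            /* GICD_SGIR: target list in bits [23:16], SGI number in bits [3:0] */
            writel_relaxed((1 << (cpu_id + 16)) | irq,
                           dist_base + GIC_DIST_SOFTINT);
    }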
Signed-off-by: Nicolas Pitre <nico@linaro.org>
(cherry picked from commit
14d2ca615a85e2dbc744c12c296affd35f119fa7)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Nicolas Pitre [Wed, 28 Nov 2012 23:17:25 +0000 (18:17 -0500)]
ARM: GIC: function to retrieve the physical address of the SGIR
In order to have early assembly code signal other CPUs in the system,
we need to get the physical address for the SGIR register used to
send IPIs. Because the register will be used with a precomputed CPU
interface ID number, there is no need for any locking in the assembly
code where this register is written to.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
(cherry picked from commit
eeb446581ba23a5a36b4f5c7bfa2b1f8f7c9fb66)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Nicolas Pitre [Wed, 20 Mar 2013 03:59:04 +0000 (23:59 -0400)]
drivers: irq-chip: irq-gic: introduce gic_cpu_if_down()
When processors are about to hit low power states, the assertion of
standbywfi signal, triggered by the wfi instruction, is essential to
entering low power modes. If an IRQ is pending on the processor at the
time wfi is issued, the wfi instruction completes and the processor
restarts execution without asserting the standbywfi signal. Depending
on the platform power controller HW this behaviour can be acceptable or
not; if this behaviour must be prevented software should be provided
with a way to disable the routing of interrupts to the core IRQ pins.
On systems where raw GIC distributor interrupts are connected to the power
controller as wake-up events (hence the power controller still senses
IRQs and can wake up cores upon IRQ pending), the GIC CPU interface can
be disabled on power down, so that the GIC CPU IF output is gated and wfi
cannot complete, thereby preventing the standbywfi issue.
This patch adds a simple function to the GIC driver that allows power-down
procedures to disable the GIC CPU IF.
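A minimal sketch of the kind of helper described here, assuming the driver-internal gic_data/gic_data_cpu_base accessors and the usual GIC CPU interface control register; the exact body in the patch may differ:

    /* Gate the GIC CPU interface output so wfi cannot complete spuriously. */
    void gic_cpu_if_down(void)
    {
            void __iomem *cpu_base = gic_data_cpu_base(&gic_data[0]);

            /* Clearing GICC_CTLR.Enable stops IRQ/FIQ delivery to this core. */
            writel_relaxed(0, cpu_base + GIC_CPU_CTRL);
    }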
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
[rewrote commit log]
Signed-off-by: Olof Johansson <olof@lixom.net>
(cherry picked from commit
10d9eb8a17cfb697967928bde06f3e7e530b03ac)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Christoffer Dall [Thu, 2 Oct 2014 07:29:59 +0000 (09:29 +0200)]
ARM: bL_switcher: do not hardcode GIC IDs in the code
Currently, GIC IDs are hardcoded making the code dependent on the 4+4 b.L
configuration. Let's allow for GIC IDs to be discovered upon switcher
initialization to support other b.L configurations such as the 1+1 one,
or 2+3 as on the VExpress TC2.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
(cherry picked from commit
ed96762e3241f57aa812977cf1920d3ee0363f4d)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Nicolas Pitre [Thu, 12 Apr 2012 05:40:31 +0000 (01:40 -0400)]
ARM: gic: add CPU migration support
This is required by the big.LITTLE switcher code.
The gic_migrate_target() changes the CPU interface mapping for the
current CPU to redirect SGIs to the specified interface, and it also
updates the target CPU for each interrupt to that CPU interface
if they were targeting the current interface. Finally, pending
SGIs for the current CPU are forwarded to the new interface.
Because Linux does not use it, the SGI source information for the
forwarded SGIs is not preserved. Neither is the source information
for the SGIs sent by the current CPU to other CPUs adjusted to match
the new CPU interface mapping. The required registers are banked so
only the target CPU could do it.
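The distributor re-targeting step can be pictured roughly as below; this is a simplified sketch (register names from linux/irqchip/arm-gic.h, locking and the SGI-forwarding step omitted), not the verbatim patch:

    #include <linux/bitops.h>
    #include <linux/io.h>
    #include <linux/irqchip/arm-gic.h>

    /*
     * Re-point all SPIs that target the outgoing CPU interface at the new one.
     * Each GIC_DIST_TARGET word holds the target byte for four interrupts.
     */
    static void retarget_spis(void __iomem *dist_base, unsigned int gic_irqs,
                              unsigned int cur_cpu_id, unsigned int new_cpu_id)
    {
            unsigned int i;

            for (i = 32; i < gic_irqs; i += 4) {
                    u32 val = readl_relaxed(dist_base + GIC_DIST_TARGET + i);
                    u32 cur = val & (0x01010101 << cur_cpu_id); /* bytes hitting old IF */

                    if (!cur)
                            continue;
                    val &= ~cur;                                /* clear old interface bit */
                    val |= ror32(cur, (cur_cpu_id - new_cpu_id) & 31); /* set new one */
                    writel_relaxed(val, dist_base + GIC_DIST_TARGET + i);
            }
    }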
Signed-off-by: Nicolas Pitre <nico@linaro.org>
(cherry picked from commit
1a6b69b6548cd0dd82549393f30dd982ceeb79d2)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Marek Szyprowski [Fri, 28 Feb 2014 13:42:55 +0000 (14:42 +0100)]
arm64: add support for reserved memory defined by device tree
Enable reserved memory initialization from device tree.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit
9bf14b7c540ae9ca7747af3a0c0d8470ef77b6ce)
Signed-off-by: Mark Brown <broonie@kernel.org>
Mark Brown [Sat, 13 Sep 2014 17:23:07 +0000 (10:23 -0700)]
Merge remote-tracking branch 'lsk/v3.10/topic/libfdt' into lsk-v3.10-arm64-misc
Conflicts:
drivers/of/fdt.c
Will Deacon [Tue, 4 Feb 2014 12:29:12 +0000 (12:29 +0000)]
arm64: atomics: fix use of acquire + release for full barrier semantics
Linux requires a number of atomic operations to provide full barrier
semantics, that is no memory accesses after the operation can be
observed before any accesses up to and including the operation in
program order.
On arm64, these operations have been incorrectly implemented as follows:
// A, B, C are independent memory locations
<Access [A]>
// atomic_op (B)
1: ldaxr x0, [B] // Exclusive load with acquire
<op(B)>
stlxr w1, x0, [B] // Exclusive store with release
cbnz w1, 1b
<Access [C]>
The assumption here being that two half barriers are equivalent to a
full barrier, so the only permitted ordering would be A -> B -> C
(where B is the atomic operation involving both a load and a store).
Unfortunately, this is not the case by the letter of the architecture
and, in fact, the accesses to A and C are permitted to pass their
nearest half barrier resulting in orderings such as Bl -> A -> C -> Bs
or Bl -> C -> A -> Bs (where Bl is the load-acquire on B and Bs is the
store-release on B). This is a clear violation of the full barrier
requirement.
The simple way to fix this is to implement the same algorithm as ARMv7
using explicit barriers:
<Access [A]>
// atomic_op (B)
dmb ish // Full barrier
1: ldxr x0, [B] // Exclusive load
<op(B)>
stxr w1, x0, [B] // Exclusive store
cbnz w1, 1b
dmb ish // Full barrier
<Access [C]>
but this has the undesirable effect of introducing *two* full barrier
instructions. A better approach is actually the following, non-intuitive
sequence:
<Access [A]>
// atomic_op (B)
1: ldxr x0, [B] // Exclusive load
<op(B)>
stlxr w1, x0, [B] // Exclusive store with release
cbnz w1, 1b
dmb ish // Full barrier
<Access [C]>
The simple observations here are:
- The dmb ensures that no subsequent accesses (e.g. the access to C)
can enter or pass the atomic sequence.
- The dmb also ensures that no prior accesses (e.g. the access to A)
can pass the atomic sequence.
- Therefore, no prior access can pass a subsequent access, or
vice-versa (i.e. A is strictly ordered before C).
- The stlxr ensures that no prior access can pass the store component
of the atomic operation.
The only tricky part remaining is the ordering between the ldxr and the
access to A, since the absence of the first dmb means that we're now
permitting re-ordering between the ldxr and any prior accesses.
From an (arbitrary) observer's point of view, there are two scenarios:
1. We have observed the ldxr. This means that if we perform a store to
[B], the ldxr will still return older data. If we can observe the
ldxr, then we can potentially observe the permitted re-ordering
with the access to A, which is clearly an issue when compared to
the dmb variant of the code. Thankfully, the exclusive monitor will
save us here since it will be cleared as a result of the store and
the ldxr will retry. Notice that any use of a later memory
observation to imply observation of the ldxr will also imply
observation of the access to A, since the stlxr/dmb ensure strict
ordering.
2. We have not observed the ldxr. This means we can perform a store
and influence the later ldxr. However, that doesn't actually tell
us anything about the access to [A], so we've not lost anything
here either when compared to the dmb variant.
This patch implements this solution for our barriered atomic operations,
ensuring that we satisfy the full barrier requirements where they are
needed.
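Concretely, a barriered arm64 atomic then takes the following shape, shown here for atomic_add_return as a representative sketch of the pattern the patch applies:

    static inline int atomic_add_return(int i, atomic_t *v)
    {
            unsigned long tmp;
            int result;

            asm volatile("// atomic_add_return\n"
    "1:     ldxr    %w0, %2\n"          /* exclusive load, no acquire */
    "       add     %w0, %w0, %w3\n"
    "       stlxr   %w1, %w0, %2\n"     /* store-release */
    "       cbnz    %w1, 1b"
            : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
            : "Ir" (i)
            : "memory");

            smp_mb();                   /* trailing dmb ish gives the full barrier */
            return result;
    }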
Cc: <stable@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
8e86f0b409a44193f1587e87b69c5dcf8f65be67)
Signed-off-by: Mark Brown <broonie@linaro.org>
Steve Capper [Wed, 2 Jul 2014 10:46:23 +0000 (11:46 +0100)]
arm64: mm: Make icache synchronisation logic huge page aware
The __sync_icache_dcache routine will only flush the dcache for the
first page of a compound page, potentially leading to stale icache
data residing further on in a hugetlb page.
This patch addresses this issue by taking into consideration the
order of the page when flushing the dcache.
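The core of the change can be sketched as follows, a simplified view of __sync_icache_dcache with the surrounding anonymous-page and icache handling elided:

    void __sync_icache_dcache(pte_t pte, unsigned long addr)
    {
            struct page *page = pte_page(pte);

            if (!test_and_set_bit(PG_dcache_clean, &page->flags))
                    /* Flush the whole compound page, not just its head page. */
                    __flush_dcache_area(page_address(page),
                                        PAGE_SIZE << compound_order(page));
    }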
Reported-by: Mark Brown <broonie@linaro.org>
Tested-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # v3.11+
(cherry picked from commit
923b8f5044da753e4985ab15c1374ced2cdf616c)
Signed-off-by: Mark Brown <broonie@linaro.org>
Catalin Marinas [Mon, 9 Jun 2014 10:55:03 +0000 (11:55 +0100)]
arm64: Fix barriers used for page table modifications
The architecture specification states that both DSB and ISB are required
between page table modifications and subsequent memory accesses using the
corresponding virtual address. When TLB invalidation takes place, the
tlb_flush_* functions already have the necessary barriers. However, there are
other functions like create_mapping() for which this is not the case.
The patch adds the DSB+ISB instructions in the set_pte() function for
valid kernel mappings. The invalid pte case is handled by tlb_flush_*
and the user mappings in general have a corresponding update_mmu_cache()
call containing a DSB. Even when update_mmu_cache() isn't called, the
kernel can still cope with an unlikely spurious page fault by
re-executing the instruction.
In addition, the set_pmd, set_pud() functions gain an ISB for
architecture compliance when block mappings are created.
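Roughly, the resulting set_pte() has this shape; a sketch only, where the pte_valid_not_user() predicate name and the option-taking dsb() form assume the related helpers and barrier backports present in this branch:

    static inline void set_pte(pte_t *ptep, pte_t pte)
    {
            *ptep = pte;

            /*
             * Only valid kernel mappings need the barriers here: invalid ptes
             * are covered by tlb_flush_*() and user mappings by the DSB in
             * update_mmu_cache().
             */
            if (pte_valid_not_user(pte)) {
                    dsb(ishst);     /* make the pte visible to the table walker */
                    isb();          /* ...and to subsequent instruction fetches */
            }
    }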
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Steve Capper <steve.capper@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
(cherry picked from commit
54d6ba0ede61f12b2a03d74bdbf004719a9cfefc)
Signed-off-by: Mark Brown <broonie@linaro.org>
Steve Capper [Fri, 2 May 2014 13:49:00 +0000 (14:49 +0100)]
arm64: mm: Optimise tlb flush logic where we have >4K granule
The tlb maintainence functions: __cpu_flush_user_tlb_range and
__cpu_flush_kern_tlb_range do not take into consideration the page
granule when looping through the address range, and repeatedly flush
tlb entries for the same page when operating with 64K pages.
This patch re-works the logic s.t. we instead advance the loop by
1 << (PAGE_SHIFT - 12), so avoid repeating ourselves.
Also the routines have been converted from assembler to static inline
functions to aid with legibility and potential compiler optimisations.
The isb() has been removed from flush_tlb_kernel_range(.) as it is
only needed when changing the execute permission of a mapping. If one
needs to set an area of the kernel as execute/non-execute an isb()
must be inserted after the call to flush_tlb_kernel_range.
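As a sketch of the reworked loop, simplified from the resulting static inline (the ASID() packing and tlbi variant follow the usual arm64 conventions):

    static inline void __flush_tlb_range(struct vm_area_struct *vma,
                                         unsigned long start, unsigned long end)
    {
            unsigned long asid = (unsigned long)ASID(vma->vm_mm) << 48;
            unsigned long addr;

            /* tlbi operands take the VA shifted right by 12, whatever the granule */
            start = asid | (start >> 12);
            end = asid | (end >> 12);

            dsb(ishst);
            for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
                    asm("tlbi vae1is, %0" : : "r" (addr));
            dsb(ish);
    }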
Cc: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
fa48e6f780a681cdbc7820e33259edfe1a79b9e3)
Signed-off-by: Mark Brown <broonie@linaro.org>
Matthew Leach [Wed, 25 Sep 2013 15:33:13 +0000 (16:33 +0100)]
arm64: use correct register width when retrieving ASID
The ASID is represented as an unsigned int in mm_context_t and we
currently use the mmid assembler macro to access this element of the
struct. This should be accessed with a register of 32-bit width. If
the incorrect register width is used the ASID will be returned in
bits[32:63] of the register when running under big-endian.
Fix a use of the mmid macro in tlb.S to use a 32-bit access.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
fc18047c732f6becba92618a397555927687efd3)
Signed-off-by: Mark Brown <broonie@linaro.org>
Mark Salter [Tue, 4 Feb 2014 16:11:10 +0000 (11:11 -0500)]
lib: add fdt_empty_tree.c
CONFIG_LIBFDT support does not include fdt_empty_tree.c which is
needed by arm64 EFI stub. Add it to libfdt_files.
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
(cherry picked from commit
adaf5687846c25613d58c0a2f5d9e024547cdbec)
Signed-off-by: Mark Brown <broonie@linaro.org>
Rob Herring [Wed, 2 Apr 2014 20:10:14 +0000 (15:10 -0500)]
of/fdt: Convert FDT functions to use libfdt
The kernel FDT functions predate libfdt and are much more limited in
functionality. Also, the kernel functions and libfdt functions are
not compatible with each other because they have different definitions
of node offsets. To avoid this incompatibility and in preparation to
add more FDT parsing functions which will need libfdt, let's first
convert the existing code to use libfdt.
The FDT unflattening, top-level FDT scanning, and property retrieval
functions are converted to use libfdt. The scanning code should be
re-worked to be more efficient and understandable by using libfdt to
find nodes directly by path or compatible strings.
Signed-off-by: Rob Herring <robh@kernel.org>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Stephen Chivers <schivers@csc.com>
(cherry picked from commit
e6a6928c3ea1d0195ed75a091e345696b916c09b)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
drivers/of/fdt.c
Mark Brown [Thu, 24 Jul 2014 20:06:21 +0000 (21:06 +0100)]
of/fdt: update of_get_flat_dt_prop in prep for libfdt
Make of_get_flat_dt_prop arguments compatible with libfdt fdt_getprop
call in preparation to convert FDT code to use libfdt. Make the return
value const and the property length ptr type an int.
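After the change the prototype essentially reads:

    const void *of_get_flat_dt_prop(unsigned long node, const char *name,
                                    int *size);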
Signed-off-by: Rob Herring <robh@kernel.org>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Stephen Chivers <schivers@csc.com>
(cherry picked from commit
9d0c4dfedd96ee54fc075b16d02f82499c8cc3a6)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arc/kernel/devtree.c
arch/arm/kernel/devtree.c
arch/arm/mach-exynos/exynos.c
arch/arm/plat-samsung/s5p-dev-mfc.c
arch/powerpc/kernel/epapr_paravirt.c
arch/powerpc/kernel/prom.c
arch/powerpc/mm/hash_utils_64.c
arch/powerpc/platforms/powernv/opal.c
arch/xtensa/kernel/setup.c
drivers/of/fdt.c
Rob Herring [Sat, 29 Mar 2014 19:14:17 +0000 (14:14 -0500)]
of/fdt: remove unused of_scan_flat_dt_by_path
of_scan_flat_dt_by_path is unused anywhere in the kernel, so remove it.
Signed-off-by: Rob Herring <robh@kernel.org>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Stephen Chivers <schivers@csc.com>
(cherry picked from commit
bba04d965d06abbbe10afd3687742389107e198e)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
drivers/of/fdt.c
Xiubo Li [Tue, 8 Apr 2014 05:48:07 +0000 (13:48 +0800)]
of: Fix the section mismatch warnings.
In tag next-20140407, building with CONFIG_DEBUG_SECTION_MISMATCH
enabled, the following warnings occur:
WARNING: drivers/built-in.o(.text.unlikely+0x2220): Section mismatch
in reference from the function __reserved_mem_check_root() to the
function .init.text:of_get_flat_dt_prop()
The function __reserved_mem_check_root() references
the function __init of_get_flat_dt_prop().
This is often because __reserved_mem_check_root lacks a __init
annotation or the annotation of of_get_flat_dt_prop is wrong.
WARNING: vmlinux.o(.text.unlikely+0xb9d0): Section mismatch in reference
from the function __reserved_mem_check_root() to the (unknown reference)
.init.data:(unknown)
The function __reserved_mem_check_root() references
the (unknown reference) __initdata (unknown).
This is often because __reserved_mem_check_root lacks a __initdata
annotation or the annotation of (unknown) is wrong.
This is caused by commit
'drivers: of: add initialization code for dynamic reserved memory'.
Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
Signed-off-by: Rob Herring <robh@kernel.org>
(cherry picked from commit
5b6241185e2cded07ca3f5f646b55c641928ba4e)
Signed-off-by: Mark Brown <broonie@linaro.org>
Josh Cartwright [Thu, 13 Mar 2014 21:36:36 +0000 (16:36 -0500)]
of: only scan for reserved mem when fdt present
When the reserved memory patches hit -next, several legacy (non-DT) boot
failures were detected and bisected down to that commit. There needs to
be some sanity checking whether a DT is even present before parsing the
reserved ranges.
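A minimal sketch of the sanity check described here, assuming it lands in early_init_fdt_scan_reserved_mem(); the exact placement may differ:

    void __init early_init_fdt_scan_reserved_mem(void)
    {
            if (!initial_boot_params)       /* legacy (non-DT) boot: nothing to scan */
                    return;

            of_scan_flat_dt(__fdt_scan_reserved_mem, NULL);
            fdt_init_reserved_mem();
    }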
Reported-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Josh Cartwright <joshc@codeaurora.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit
2040b52768ebab6e7bd73af0dc63703269c62f17)
Signed-off-by: Mark Brown <broonie@linaro.org>
Marek Szyprowski [Fri, 28 Feb 2014 13:42:49 +0000 (14:42 +0100)]
drivers: of: add support for custom reserved memory drivers
Add support for custom reserved memory drivers. Call their init() function
for each reserved region and prepare for using the operations they provide
via the reserved_mem->ops array.
Based on previous code provided by Josh Cartwright <joshc@codeaurora.org>
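From a driver's point of view, usage looks roughly like this; the compatible string and names are purely illustrative, and the init callback signature shown is the one used by current kernels, which may differ slightly from this backport:

    #include <linux/of_reserved_mem.h>

    static int __init example_rmem_init(struct reserved_mem *rmem)
    {
            pr_info("example: reserved region at %pa, size %pa\n",
                    &rmem->base, &rmem->size);
            /* stash rmem->base/rmem->size, or install custom rmem->ops here */
            return 0;
    }
    RESERVEDMEM_OF_DECLARE(example, "vendor,example-region", example_rmem_init);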
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit
f618c4703a14672d27bc2ca5d132a844363d6f5f)
Signed-off-by: Mark Brown <broonie@linaro.org>
Marek Szyprowski [Fri, 28 Feb 2014 13:42:48 +0000 (14:42 +0100)]
drivers: of: add initialization code for dynamic reserved memory
This patch adds support for dynamically allocated reserved memory regions
declared in device tree. Such regions are defined by 'size', 'alignment'
and 'alloc-ranges' properties.
Based on previous code provided by Josh Cartwright <joshc@codeaurora.org>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit
3f0c8206644836e4f10a6b9fc47cda6a9a372f9b)
Signed-off-by: Mark Brown <broonie@linaro.org>
Marek Szyprowski [Fri, 28 Feb 2014 13:42:47 +0000 (14:42 +0100)]
drivers: of: add initialization code for static reserved memory
This patch adds support for static (defined by 'reg' property) reserved
memory regions declared in device tree.
Memory blocks can be reliably reserved only during early boot. This must
happen before the whole memory management subsystem is initialized,
because we need to ensure that the given contiguous blocks are not yet
allocated by the kernel. Also it must happen before kernel mappings for the
whole low memory are created, to ensure that there will be no mappings
(for reserved blocks). Typically, all this happens before device tree
structures are unflattened, so we need to get reserved memory layout
directly from fdt.
Based on previous code provided by Josh Cartwright <joshc@codeaurora.org>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit
e8d9d1f5485b52ec3c4d7af839e6914438f6c285)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
drivers/of/fdt.c
include/linux/of_fdt.h
Marek Szyprowski [Mon, 26 Aug 2013 12:41:56 +0000 (14:41 +0200)]
drivers: of: add function to scan fdt nodes given by path
Add a function to scan the flattened device-tree starting from the
node given by the path. It is used to extract information (like reserved
memory), which is required on early boot before we can unflatten the tree.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Tomasz Figa <t.figa@samsung.com>
Reviewed-by: Rob Herring <rob.herring@calxeda.com>
(cherry picked from commit
57d74bcf3072b65bde5aa540cedc976a75c48e5c)
Signed-off-by: Mark Brown <broonie@linaro.org>
Sascha Hauer [Mon, 5 Aug 2013 12:40:44 +0000 (14:40 +0200)]
OF: Add helper for matching against linux,stdout-path
Device trees may have a linux,stdout-path property in the chosen
node describing the console device. This adds a helper function
to match a device against this property so a driver can call
add_preferred_console for a matching device.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit
5c19e95216b93b0d29c6a4887e69a980edc6fc81)
Signed-off-by: Mark Brown <broonie@linaro.org>
Santosh Shilimkar [Mon, 1 Jul 2013 18:20:35 +0000 (14:20 -0400)]
of: Specify initrd location using 64-bit
On some PAE architectures, the entire range of physical memory could reside
outside the 32-bit limit. These systems need the ability to specify the
initrd location using 64-bit numbers.
This patch globally modifies the early_init_dt_setup_initrd_arch() function to
use 64-bit numbers instead of the current unsigned long.
There has been quite a bit of debate about whether to use u64 or phys_addr_t.
It was concluded to stick to u64 to be consistent with the rest of the device
tree code. As summarized by Geert, "The address to load the initrd is decided
by the bootloader/user and set at that point later in time. The dtb should not
be tied to the kernel you are booting."
More details on the discussion can be found here:
https://lkml.org/lkml/2013/6/20/690
https://lkml.org/lkml/2012/9/13/544
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit
374d5c9964c10373ba39bbe934f4262eb87d7114)
Signed-off-by: Mark Brown <broonie@linaro.org>
Marc Zyngier [Fri, 7 Dec 2012 18:35:41 +0000 (18:35 +0000)]
arm64: KVM: define HYP and Stage-2 translation page flags
Add HYP and S2 page flags, for both normal and device memory.
Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit
363116073a26dbc2903d8417047597eebcc05273)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/include/asm/pgtable-hwdef.h
arch/arm64/include/asm/pgtable.h
Dave Anderson [Tue, 15 Apr 2014 17:53:24 +0000 (18:53 +0100)]
arm64: Fix for the arm64 kern_addr_valid() function
Fix for the arm64 kern_addr_valid() function to recognize
virtual addresses in the kernel logical memory map. The
function fails as written because it does not check whether
the addresses in that region are mapped at the pmd level to
2MB or 512MB pages, continues the page table walk to the
pte level, and issues a garbage value to pfn_valid().
Tested on 4K-page and 64K-page kernels.
Signed-off-by: Dave Anderson <anderson@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
da6e4cb67c6dd1f72257c0a4a97c26dc4e80d3a7)
Signed-off-by: Mark Brown <broonie@linaro.org>
Catalin Marinas [Thu, 3 Apr 2014 14:57:15 +0000 (15:57 +0100)]
arm64: Clean up the default pgprot setting
The primary aim of this patchset is to remove the pgprot_default and
prot_sect_default global variables and rely strictly on predefined
values. The original goal was to be able to run SMP kernels on UP
hardware by not setting the Shareability bit. However, it is unlikely to
see UP ARMv8 hardware and even if we do, the Shareability bit is no
longer assumed to disable cacheable accesses.
A side effect is that the device mappings now have the Shareability
attribute set. The hardware, however, should ignore it since Device
accesses are always Outer Shareable.
Following the removal of the two global variables, there is some PROT_*
macro reshuffling and cleanup, including the __PAGE_* macros (replaced
by PAGE_*).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit
a501e32430d4232012ab708b8f0ce841f29e0f02)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/include/asm/io.h
arch/arm64/include/asm/pgtable.h
arch/arm64/mm/mmu.c
Mark Salter [Wed, 12 Mar 2014 16:28:06 +0000 (12:28 -0400)]
arm64: Add function to create identity mappings
At boot time, before switching to a virtual UEFI memory map, firmware
expects UEFI memory and IO regions to be identity mapped whenever
the kernel makes runtime services calls. The existing early boot code
creates an identity map of kernel text/data but this is not sufficient
for UEFI. This patch adds a create_id_mapping() function which reuses
the core code of the existing create_mapping().
Signed-off-by: Mark Salter <msalter@redhat.com>
[ Fixed error message formatting (%pa). ]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
(cherry picked from commit
d7ecbddf4caefbac1b99478dd2b679f83dfc2545)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/mm/mmu.c
Mark Rutland [Tue, 24 Jun 2014 15:51:35 +0000 (16:51 +0100)]
arm64: place initial page tables above the kernel
Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
bootloaders may use portions of this memory below the kernel and we do
not parse the memory reservation list until after the MMU has been
enabled. As such we may clobber some memory a bootloader wishes to have
preserved.
To enable the use of all of this memory by bootloaders (when the
required memory reservations are communicated to the kernel) it is
necessary to move our initial page tables elsewhere. As we currently
have an effectively unbound requirement for memory at the end of the
kernel image for .bss, we can place the page tables here.
This patch moves the initial page table to the end of the kernel image,
after the BSS. As they do not consist of any initialised data they will
be stripped from the kernel Image as with the BSS. The BSS clearing
routine is updated to stop at __bss_stop rather than _end so as to not
clobber the page tables, and memory reservations made redundant by the
new organisation are removed.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
bd00cd5f8c8c3c282bb1e1eac6a6679a4f808091)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/mm/init.c
Catalin Marinas [Wed, 26 Mar 2014 18:25:55 +0000 (18:25 +0000)]
arm64: Relax the kernel cache requirements for boot
With system caches for the host OS or architected caches for guest OS we
cannot easily guarantee that there are no dirty or stale cache lines for
the areas of memory written by the kernel during boot with the MMU off
(therefore non-cacheable accesses).
This patch adds the necessary cache maintenance during boot and relaxes
the booting requirements.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
c218bca74eeafa2f8528b6bbb34d112075fcf40a)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/kernel/head.S
Matthew Leach [Fri, 11 Oct 2013 13:52:16 +0000 (14:52 +0100)]
arm64: head: create a new function for setting the boot_cpu_mode flag
Currently, the code for setting the __cpu_boot_mode flag is munged in
with el2_setup. This makes things difficult on a BE bringup as a
memory access has to have occurred before el2_setup which is the place
that we'd like to set the endianness on the current EL.
Create a new function for setting __cpu_boot_mode and have el2_setup
return the mode the CPU booted in. Also define a new constant in virt.h,
BOOT_CPU_MODE_EL1, for readability.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
828e9834e9a5b7e61046aa3c5f603a4fecba2fb4)
Signed-off-by: Mark Brown <broonie@linaro.org>
Ian Campbell [Tue, 15 Jul 2014 07:38:08 +0000 (08:38 +0100)]
arm64: Align the kbuild output for VDSOL and VDSOA
Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kbuild@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
ad789ba5f7086138461420d2156478d33fb61077)
Signed-off-by: Mark Brown <broonie@linaro.org>
Will Deacon [Wed, 9 Jul 2014 18:22:11 +0000 (19:22 +0100)]
arm64: vdso: put vdso datapage in a separate vma
The VDSO datapage doesn't need to be executable (no code there) or
CoW-able (the kernel writes the page, so a private copy is totally
useless).
This patch moves the datapage into its own VMA, identified as "[vvar]"
in /proc/<pid>/maps.
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
8715493852783358ef8656a0054a14bf822509cf)
Signed-off-by: Mark Brown <broonie@linaro.org>
Catalin Marinas [Tue, 15 Jul 2014 14:46:02 +0000 (15:46 +0100)]
arm64: Remove duplicate (SWAPPER|IDMAP)_DIR_SIZE definitions
Just keep the asm/page.h definition as this is included in vmlinux.lds.S
as well.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
(cherry picked from commit
b2f8c07bcb7d1a3575f41444d2d8048d0c922762)
Signed-off-by: Mark Brown <broonie@linaro.org>
Mark Rutland [Tue, 24 Jun 2014 15:51:34 +0000 (16:51 +0100)]
arm64: head.S: remove unnecessary function alignment
Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't
span any page boundary, which simplifies the idmap and spares us
requiring an additional page table to map half of the function. In
keeping with other important requirements in architecture code, this
fact is undocumented.
Additionally, as the function consists of three instructions totalling
12 bytes with no literal pool data, a smaller alignment of 16 bytes
would be sufficient.
This patch reduces the alignment to 16 bytes and documents the
underlying reason for the alignment. This reduces the required alignment
of the entire .head.text section from 64 bytes to 16 bytes, though it
may still be aligned to a larger value depending on TEXT_OFFSET.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
909a4069da65a5cfca8c968edf9f0d99f694d2f3)
Signed-off-by: Mark Brown <broonie@linaro.org>
Mark Salter [Tue, 17 Jun 2014 17:14:26 +0000 (18:14 +0100)]
arm64: export __cpu_{clear,copy}_user_page functions
The __cpu_clear_user_page() and __cpu_copy_user_page() functions
are not currently exported. This prevents modules from using
clear_user_page() and copy_user_page().
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
bec7cedc8a92bfe96d32febe72634b30c63896bd)
Signed-off-by: Mark Brown <broonie@linaro.org>
Vinayak Kale [Wed, 26 Mar 2014 12:19:06 +0000 (12:19 +0000)]
arm64: dts: Add more serial port nodes in APM X-Gene device tree
APM X-Gene Storm SoC supports 4 serial ports. This patch adds device nodes
for serial ports 1 to 3 (a device node for serial port 0 is already present
in the dts file).
This patch also sets the compatible property of serial nodes to "ns16550a".
Signed-off-by: Vinayak Kale <vkale@apm.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
457ced8458605f1935214289d44aabb80bf75756)
Signed-off-by: Mark Brown <broonie@linaro.org>
AKASHI Takahiro [Tue, 24 Sep 2013 09:00:50 +0000 (10:00 +0100)]
arm64: avoid multiple evaluation of ptr in get_user/put_user()
get_user() is defined as a function macro in arm64, and trace_get_user()
calls it as follows:
get_user(ch, ptr++);
Since the second parameter occurs twice in the definition, 'ptr++' is
unexpectedly evaluated twice and trace_get_user() will generate a bogus
string from the user-provided one. As a result, some ftrace sysfs operations,
like "echo FUNCNAME > set_ftrace_filter," hit this case and eventually fail.
This patch fixes the issue both in get_user() and put_user().
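The fix follows the usual evaluate-the-argument-once macro pattern, roughly as sketched below for get_user(); the access_ok() call uses the old VERIFY_READ signature of this kernel era:

    #define get_user(x, ptr)                                                \
    ({                                                                      \
            __typeof__(*(ptr)) __user *__p = (ptr); /* evaluate ptr once */ \
            might_fault();                                                  \
            access_ok(VERIFY_READ, __p, sizeof(*__p)) ?                     \
                    __get_user((x), __p) :                                  \
                    ((x) = 0, -EFAULT);                                     \
    })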
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
[catalin.marinas@arm.com: added __user type annotation and s/optr/__p/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
1f65c13efef69b6dc908e588f91a133641d8475c)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/include/asm/uaccess.h
Stefano Stabellini [Thu, 8 May 2014 15:48:13 +0000 (15:48 +0000)]
arm64: introduce virt_to_pfn
virt_to_pfn has been defined in arch/arm/include/asm/memory.h by commit
e26a9e0 "ARM: Better virt_to_page() handling" and Xen has come to rely
on it. Introduce virt_to_pfn on arm64 too.
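The added definition mirrors the 32-bit ARM one:

    #define virt_to_pfn(kaddr)      (__pa(kaddr) >> PAGE_SHIFT)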
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
1f53ba6e81749a420226e5502c49ab83ba85c81d)
Signed-off-by: Mark Brown <broonie@linaro.org>
Catalin Marinas [Fri, 16 May 2014 15:44:32 +0000 (16:44 +0100)]
Revert "arm64: Introduce execute-only page access permissions"
This reverts commit bc07c2c6e9ed125d362af0214b6313dca180cb08.
While the aim is increased security for --x memory maps, it does not
protect against kernel level reads. Until SECCOMP is implemented for
arm64, revert this patch to avoid giving a false idea of execute-only
mappings.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
5a0fdfada3a2aa50d7b947a2e958bf00cbe0d830)
Signed-off-by: Mark Brown <broonie@linaro.org>
Alex Shi [Mon, 26 May 2014 08:31:57 +0000 (16:31 +0800)]
Revert "arm64: init: Move of_clk_init to time_init"
This reverts commit 638b6642b041f83802ea5d7ca68b45ce508bbc5c.
Since we are close to the 14.05 release, revert this commit as a quick fix
for the missing clock bug on ARMv8:
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] NR_IRQS:64 nr_irqs:64 0
[ 0.000000] vexpress-osc: Failed to obtain config func for node '/smb/motherboard/mcc/osc@1'!
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Catalin Marinas [Thu, 3 Apr 2014 14:57:15 +0000 (15:57 +0100)]
arm64: Clean up the default pgprot setting
The primary aim of this patchset is to remove the pgprot_default and
prot_sect_default global variables and rely strictly on predefined
values. The original goal was to be able to run SMP kernels on UP
hardware by not setting the Shareability bit. However, it is unlikely to
see UP ARMv8 hardware and even if we do, the Shareability bit is no
longer assumed to disable cacheable accesses.
A side effect is that the device mappings now have the Shareability
attribute set. The hardware, however, should ignore it since Device
accesses are always Outer Shareable.
Following the removal of the two global variables, there is some PROT_*
macro reshuffling and cleanup, including the __PAGE_* macros (replaced
by PAGE_*).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit
a501e32430d4232012ab708b8f0ce841f29e0f02)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/include/asm/io.h
arch/arm64/include/asm/pgtable.h
Mark Brown [Sat, 24 May 2014 13:04:44 +0000 (14:04 +0100)]
Merge remote-tracking branch 'lsk/v3.10/topic/arm64-dma' into lsk-v3.10-arm64-misc
Conflicts:
arch/arm64/Kconfig
arch/arm64/mm/dma-mapping.c
mm/Kconfig
Mark Salter [Mon, 7 Apr 2014 22:39:52 +0000 (15:39 -0700)]
arm64: add early_ioremap support
Add support for early IO or memory mappings which are needed before the
normal ioremap() is usable. This also adds fixmap support for permanent
fixed mappings such as that used by the earlyprintk device register
region.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
bf4b558eba920a38f91beb5ee62a8ce2628c92f7)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/Kconfig
arch/arm64/mm/ioremap.c
Mark Salter [Mon, 7 Apr 2014 22:39:48 +0000 (15:39 -0700)]
mm: create generic early_ioremap() support
This patch creates a generic implementation of early_ioremap() support
based on the existing x86 implementation. early_ioremp() is useful for
early boot code which needs to temporarily map I/O or memory regions
before normal mapping functions such as ioremap() are available.
Some architectures have optional MMU. In the no-MMU case, the remap
functions simply return the passed in physical address and the unmap
functions do nothing.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
9e5c33d7aeeef62e5fa7e74f94432685bd03026b)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
mm/Kconfig
mm/Makefile
Mark Salter [Thu, 23 Jan 2014 23:53:48 +0000 (15:53 -0800)]
add generic fixmap.h
Many architectures provide an asm/fixmap.h which defines support for
compile-time 'special' virtual mappings which need to be made before
paging_init() has run. This support is also used for early ioremap on
x86. Much of this support is identical across the architectures. This
patch consolidates all of the common bits into asm-generic/fixmap.h
which is intended to be included from arch/*/include/asm/fixmap.h.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jonas Bonn <jonas.bonn@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
d57c33c5daa4efa9e4d303bd0faf868080b532be)
Signed-off-by: Mark Brown <broonie@linaro.org>
Loc Ho [Wed, 14 May 2014 00:02:37 +0000 (10:02 +1000)]
arm64: add APM X-Gene SoC RTC DTS entry
This patch adds APM X-Gene SoC RTC DTS entry
Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Loc Ho <lho@apm.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Alessandro Zummo <a.zummo@towertech.it>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit
7fe2f8776216e25ad7fdb22f3966177777c5022c)
Signed-off-by: Mark Brown <broonie@linaro.org>
Will Deacon [Fri, 2 May 2014 15:24:10 +0000 (16:24 +0100)]
arm64: barriers: make use of barrier options with explicit barriers
When calling our low-level barrier macros directly, we can often suffice
with more relaxed behaviour than the default "all accesses, full system"
option.
This patch updates the users of dsb() to specify the option which they
actually require.
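The barrier macros then take the option as an argument, so call sites can pick the weakest sufficient barrier; a sketch of the macro form:

    #define dmb(opt)        asm volatile("dmb " #opt : : : "memory")
    #define dsb(opt)        asm volatile("dsb " #opt : : : "memory")

    /* typical call site: dsb(ishst) orders prior stores, inner shareable only */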
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
98f7685ee69f871ba991089cb9685f0da07517ea)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/kvm/sys_regs.c
Will Deacon [Wed, 30 Apr 2014 15:23:06 +0000 (16:23 +0100)]
arm64: xchg: prevent warning if return value is unused
Some users of xchg() don't bother using the return value, which results
in a compiler warning like the following (from kgdb):
In file included from linux/arch/arm64/include/asm/atomic.h:27:0,
from include/linux/atomic.h:4,
from include/linux/spinlock.h:402,
from include/linux/seqlock.h:35,
from include/linux/time.h:5,
from include/uapi/linux/timex.h:56,
from include/linux/timex.h:56,
from include/linux/sched.h:19,
from include/linux/pid_namespace.h:4,
from kernel/debug/debug_core.c:30:
kernel/debug/debug_core.c: In function ‘kgdb_cpu_enter’:
linux/arch/arm64/include/asm/cmpxchg.h:75:3: warning: value computed is not used [-Wunused-value]
((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
^
linux/arch/arm64/include/asm/atomic.h:132:30: note: in expansion of macro ‘xchg’
#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
kernel/debug/debug_core.c:504:4: note: in expansion of macro ‘atomic_xchg’
atomic_xchg(&kgdb_active, cpu);
^
This patch makes use of the same trick as we do for cmpxchg, by assigning
the return value to a dummy variable in the xchg() macro itself.
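The resulting macro shape (essentially the same trick as cmpxchg: route the result through a local so the statement expression's value can be ignored without a warning):

    #define xchg(ptr, x)                                                    \
    ({                                                                      \
            __typeof__(*(ptr)) __ret;                                       \
            __ret = (__typeof__(*(ptr)))                                    \
                    __xchg((unsigned long)(x), (ptr), sizeof(*(ptr)));      \
            __ret;                                                          \
    })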
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
e1dfda9ced9bea1413a736f0d578f8218a7788ec)
Signed-off-by: Mark Brown <broonie@linaro.org>
Bjorn Helgaas [Thu, 8 May 2014 21:13:47 +0000 (22:13 +0100)]
arm64: Make atomic64_read() return "long", not "long long"
arm64 sets CONFIG_64BIT=y and hence uses the "long counter" atomic64_t
definition from include/linux/types.h. Make atomic64_read() return "long",
not "long long".
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
ba6bf8c85cb0d263ca9a98ef6a76ab651a97c60b)
Signed-off-by: Mark Brown <broonie@linaro.org>
Catalin Marinas [Thu, 3 Apr 2014 15:17:32 +0000 (16:17 +0100)]
arm64: Introduce execute-only page access permissions
The ARMv8 architecture allows execute-only user permissions by clearing
the PTE_UXN and PTE_USER bits. The kernel, however, can still access
such page, so execute-only page permission does not protect against
read(2)/write(2) etc. accesses. Systems requiring such protection must
implement/enable features like SECCOMP.
This patch changes the arm64 __P100 and __S100 protection_map[] macros
to the new __PAGE_EXECONLY attributes. A side effect is that
pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER
isn't set. To work around this, the check is done on the PTE_NG bit via
the pte_valid_ng() macro. VM_READ is also checked now for page faults.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
bc07c2c6e9ed125d362af0214b6313dca180cb08)
Signed-off-by: Mark Brown <broonie@linaro.org>
Catalin Marinas [Fri, 4 Apr 2014 14:42:16 +0000 (15:42 +0100)]
arm64: Remove the aux_context structure
This patch removes the aux_context structure (and the containing file)
to allow the placement of the _aarch64_ctx end magic based on the
context stored on the signal stack.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
0e0276d1e1dd063cd14ce377707970d0417a0792)
Signed-off-by: Mark Brown <broonie@linaro.org>
Catalin Marinas [Fri, 4 Apr 2014 10:49:05 +0000 (11:49 +0100)]
arm64: Remove boot thread synchronisation for spin-table release method
The synchronisation with the boot thread already happens in __cpu_up()
via wait_for_completion_timeout(). In addition, __cpu_up() calls are
protected by the cpu_add_remove_lock mutex and already serialised.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
6400111399e16a535231ebd76389c894ea1837ff)
Signed-off-by: Mark Brown <broonie@linaro.org>
Geert Uytterhoeven [Tue, 11 Mar 2014 10:23:39 +0000 (11:23 +0100)]
arm64: mm: Remove superfluous "the" in comment
Signed-off-by: Geert Uytterhoeven <geert+renesas@linux-m68k.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
(cherry picked from commit
aad9061bf37e05d29a2a94ae8fe1e12d8808a0dd)
Signed-off-by: Mark Brown <broonie@linaro.org>
Chanho Min [Mon, 14 Apr 2014 07:38:53 +0000 (08:38 +0100)]
arm64: init: Move of_clk_init to time_init
Clock providers should be initialized before clocksource_of_init(); if not,
clocksource initialization can fail to get the clock.
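An abridged sketch of the intended ordering in arm64's time_init(); the rest of the function is omitted:

    void __init time_init(void)
    {
            /* Register DT clock providers first... */
            of_clk_init(NULL);
            /* ...so clocksource drivers probed here can get their input clocks. */
            clocksource_of_init();
    }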
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Chanho Min <chanho.min@lge.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
bc3ee18a7a57243721ecfd879319e3d2e882f289)
Signed-off-by: Mark Brown <broonie@linaro.org>
Leo Yan [Wed, 16 Apr 2014 12:26:35 +0000 (13:26 +0100)]
arm64: initialize spinlock for init_mm's context
ARM64 has defined the spinlock for init_mm's context, so we need to initialize
the spinlock structure; otherwise during the suspend flow it will dump
a spinlock bad magic warning as below:
[ 39.084394] Disabling non-boot CPUs ...
[ 39.092871] BUG: spinlock bad magic on CPU#1, swapper/1/0
[ 39.092896] lock: init_mm+0x338/0x3e0, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
[ 39.092907] CPU: 1 PID: 0 Comm: swapper/1 Tainted: G O 3.10.33 #125
[ 39.092912] Call trace:
[ 39.092927] [<ffffffc000087e64>] dump_backtrace+0x0/0x16c
[ 39.092934] [<ffffffc000087fe0>] show_stack+0x10/0x1c
[ 39.092947] [<ffffffc000765334>] dump_stack+0x1c/0x28
[ 39.092953] [<ffffffc0007653b8>] spin_dump+0x78/0x88
[ 39.092960] [<ffffffc0007653ec>] spin_bug+0x24/0x34
[ 39.092971] [<ffffffc000300a28>] do_raw_spin_lock+0x98/0x17c
[ 39.092979] [<ffffffc00076cf08>] _raw_spin_lock_irqsave+0x4c/0x60
[ 39.092990] [<ffffffc000094044>] set_mm_context+0x1c/0x6c
[ 39.092996] [<ffffffc0000941c8>] __new_context+0x94/0x10c
[ 39.093007] [<ffffffc0000d63d4>] idle_task_exit+0x104/0x1b0
[ 39.093014] [<ffffffc00008d91c>] cpu_die+0x14/0x74
[ 39.093021] [<ffffffc000084f74>] arch_cpu_idle_dead+0x8/0x14
[ 39.093030] [<ffffffc0000e7f18>] cpu_startup_entry+0x1ec/0x258
[ 39.093036] [<ffffffc00008d810>] secondary_start_kernel+0x114/0x124
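The fix amounts to initialising that lock statically, along the lines of the 32-bit ARM INIT_MM_CONTEXT helper; a sketch, the exact hunk in the patch may differ:

    /* arch/arm64/include/asm/mmu.h */
    #define INIT_MM_CONTEXT(name) \
            .context.id_lock = __SPIN_LOCK_UNLOCKED(name.context.id_lock),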
Signed-off-by: Leo Yan <leoy@marvell.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
8f0712037b4ed63dfce844939ac9866054f15ca0)
Signed-off-by: Mark Brown <broonie@linaro.org>
Rob Herring [Fri, 18 Apr 2014 22:19:59 +0000 (17:19 -0500)]
arm64: enable FIX_EARLYCON_MEM kconfig
In order to support earlycon on arm64, we need to enable earlycon fixmap
support.
Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit
92cc15fcb543a8ab9af5682a2011944e6f48fd4c)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/Kconfig
Catalin Marinas [Fri, 25 Apr 2014 14:31:45 +0000 (15:31 +0100)]
arm64: Use bus notifiers to set per-device coherent DMA ops
Recently, the default DMA ops have been changed to non-coherent for
alignment with 32-bit ARM platforms (and DT files). This patch adds bus
notifiers to be able to set the coherent DMA ops (with no cache
maintenance) for devices explicitly marked as coherent via the
"dma-coherent" DT property.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
6ecba8eb51b7d23fda66388a5420be7d8688b186)
Signed-off-by: Mark Brown <broonie@linaro.org>
Mark Salter [Mon, 7 Apr 2014 22:39:51 +0000 (15:39 -0700)]
arm64: initialize pgprot info earlier in boot
Presently, paging_init() calls init_mem_pgprot() to initialize pgprot
values used by macros such as PAGE_KERNEL, PAGE_KERNEL_EXEC, etc.
The new fixmap and early_ioremap support also needs to use these macros
before paging_init() is called. This patch moves the init_mem_pgprot()
call out of paging_init() and into setup_arch() so that pgprot_default
gets initialized in time for fixmap and early_ioremap.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
0bf757c73d6612d3d279de3f61b35062aa9c8b1d)
Signed-off-by: Mark Brown <broonie@linaro.org>
Laura Abbott [Sat, 5 Apr 2014 00:30:50 +0000 (01:30 +0100)]
arm64: Add missing Kconfig for CONFIG_STRICT_DEVMEM
The Kconfig for CONFIG_STRICT_DEVMEM is missing despite being
used in mmap.c. Add it.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
d253b4406df69fa7a74231769d6f6ad80dc33063)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/Kconfig.debug
Catalin Marinas [Fri, 28 Mar 2014 09:49:13 +0000 (09:49 +0000)]
Revert "arm64: virt: ensure visibility of __boot_cpu_mode"
This reverts commit 82b2f495fba338d1e3098dde1df54944a9c19751. The
__boot_cpu_mode variable is flushed in head.S after being written,
therefore the additional cache flushing is no longer required.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
0a997ecc08e0b551119c56d52a591d9e5b38a7cd)
Signed-off-by: Mark Brown <broonie@linaro.org>
Laura Abbott [Fri, 14 Mar 2014 19:52:24 +0000 (19:52 +0000)]
arm64: Support DMA_ATTR_WRITE_COMBINE
DMA_ATTR_WRITE_COMBINE is currently ignored. Set the pgprot
appropriately for non-coherent operations.
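A sketch of the pgprot selection this implies; the helper name is illustrative:

    static pgprot_t __get_dma_pgprot(struct dma_attrs *attrs, pgprot_t prot,
                                     bool coherent)
    {
            if (!coherent || dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs))
                    return pgprot_writecombine(prot);
            return prot;
    }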
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
214fdbe74a096c3aeb7af81d7900e2ab966b10d6)
Signed-off-by: Mark Brown <broonie@linaro.org>
Laura Abbott [Fri, 14 Mar 2014 19:52:23 +0000 (19:52 +0000)]
arm64: Implement custom mmap functions for dma mapping
The current dma_ops do not specify an mmap function so mapping
falls back to the default implementation. There are at least
two issues with using the default implementation:
1) The pgprot is always pgprot_noncached (strongly ordered)
memory even with coherent operations
2) dma_common_mmap calls virt_to_page on the remapped non-coherent
address which leads to invalid memory being mapped.
Fix both of these issues by implementing a custom mmap function which
correctly accounts for remapped addresses and sets vm_page_prot
appropriately.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[catalin.marinas@arm.com: replaced "arm64_" with "__" prefix for consistency]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
6e8d7968e92f7668a2a615773ad3940f0219dcbd)
Signed-off-by: Mark Brown <broonie@linaro.org>
Christopher Covington [Wed, 19 Mar 2014 16:29:37 +0000 (16:29 +0000)]
arm64: Fix __range_ok macro
Without this, the following scenario is incorrectly determined
to be invalid.
addr 0x7f_ffffe000 size 8192 addr_limit 0x80_00000000
This behavior was observed while trying to vmsplice the stack
as part of a CRIU dump of a process on a system started with the
norandmaps kernel parameter.
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
31b1e940c5d47ee1a01baeccfb1b2b8890822d1a)
Signed-off-by: Mark Brown <broonie@linaro.org>
Loc Ho [Fri, 14 Mar 2014 23:53:21 +0000 (17:53 -0600)]
arm64: Add APM X-Gene SoC AHCI SATA host controller DTS entries
This patch adds APM X-Gene SoC AHCI SATA host controller DTS entries.
Signed-off-by: Loc Ho <lho@apm.com>
Signed-off-by: Tuan Phan <tphan@apm.com>
Signed-off-by: Suman Tripathi <stripathi@apm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Tejun Heo <tj@kernel.org>
(cherry picked from commit
db8c0286d18c2d3eaec2c4da34767db0f4f6ffaa)
Signed-off-by: Mark Brown <broonie@linaro.org>
Loc Ho [Fri, 14 Mar 2014 23:53:18 +0000 (17:53 -0600)]
arm64: Add APM X-Gene SoC 15Gbps Multi-purpose PHY DTS entries
This patch adds the DTS entries for the APM X-Gene SoC 15Gbps Multi-purpose
PHY driver. The PHYs for SATA controllers 2 and 3 are enabled by default.
Signed-off-by: Loc Ho <lho@apm.com>
Signed-off-by: Tuan Phan <tphan@apm.com>
Signed-off-by: Suman Tripathi <stripathi@apm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Tejun Heo <tj@kernel.org>
(cherry picked from commit
71b70ee9350f239ea021bbb737771ebd5d02c020)
Signed-off-by: Mark Brown <broonie@linaro.org>
Will Deacon [Fri, 14 Mar 2014 17:47:05 +0000 (17:47 +0000)]
arm64: rwsem: use asm-generic rwsem implementation
asm-generic offers an atomic-add based rwsem implementation, which
can avoid the need for heavier, spinlock-based synchronisation on the
fast path.
This patch makes use of the optimised implementation for arm64 CPUs.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
c209f79940ac0c75ae8d2f503a2b9d86255e266c)
Signed-off-by: Mark Brown <broonie@linaro.org>
Will Deacon [Fri, 14 Mar 2014 17:47:04 +0000 (17:47 +0000)]
asm-generic: rwsem: de-PPCify rwsem.h
asm-generic/rwsem.h used to live under arch/powerpc. During its
liberation to common code, a few references to its former home were
preserved, in particular the definition of RWSEM_ACTIVE_MASK is
predicated on CONFIG_PPC64.
This patch updates the ifdefs and comments to architecturally neutral
versions.
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Richard Kuo <rkuo@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
e172800e5d3162f97d332b3745e3743ce150ec48)
Signed-off-by: Mark Brown <broonie@linaro.org>
Jingoo Han [Wed, 5 Mar 2014 05:35:45 +0000 (05:35 +0000)]
arm64: smp: make local symbol static
Make smp_spin_table_cpu_postboot() static, because this function
is used only in this file.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
7184659bed3090248e382d98a49a3c1bcfe11174)
Signed-off-by: Mark Brown <broonie@linaro.org>
Jingoo Han [Wed, 5 Mar 2014 05:34:32 +0000 (05:34 +0000)]
arm64: debug: make local symbols static
Make local symbols static, because these are used only in this
file.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
242c04bc4be959ae28618772e439c27e87a7d880)
Signed-off-by: Mark Brown <broonie@linaro.org>
Will Deacon [Mon, 10 Mar 2014 10:36:52 +0000 (10:36 +0000)]
arm64: barriers: add dmb barrier
Commit 8adbf57fc429 ("irqchip: gic: use dmb ishst instead of dsb when
raising a softirq") added an explicit dmb(...) call to the GIC driver.
This patch adds a simple dmb() macro to arm64, which expands to a DMB SY
instruction.
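i.e. something like the following in asm/barrier.h, prior to the later rework that made the barrier macros take an option argument:

    #define dmb()           asm volatile("dmb sy" : : : "memory")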
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
d152d22a18c240286c19997a6249ee76ea055926)
Signed-off-by: Mark Brown <broonie@linaro.org>
Mark Rutland [Wed, 14 Aug 2013 08:54:54 +0000 (09:54 +0100)]
arm64: remove unnecessary cache flush at boot
Currently we flush the entire dcache at boot within __cpu_setup, but
this is unnecessary as the booting protocol demands that the dcache is
invalid and off upon entering the kernel. The presence of the cache
flush only serves to hide bugs in bootloaders, and is not safe in the
presence of SMP.
In an SMP boot scenario the CPUs enter coherency outside of the kernel,
and the primary CPU enables its caches before bringing up secondary
CPUs. Therefore if any secondary CPU has an entry in its cache (in
violation of the boot protocol), the primary CPU might snoop it even if
the secondary CPU's cache is disabled. The boot-time cache flush only
serves to hide a firmware bug and slows down CPU boot unnecessarily.
This patch removes the unnecessary boot-time cache flush.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: make __flush_dcache_all local only]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
bff705950e2cdcf35641dee35eb14bad9ed49e8f)
Signed-off-by: Mark Brown <broonie@linaro.org>
Catalin Marinas [Fri, 28 Feb 2014 16:12:25 +0000 (16:12 +0000)]
arm64: Fix !CONFIG_SMP kernel build
Commit
fb4a96029c8a (arm64: kernel: fix per-cpu offset restore on
resume) uses per_cpu_offset() unconditionally during CPU wakeup;
however, this is only defined for the SMP case.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Dave P Martin <Dave.Martin@arm.com>
(cherry picked from commit
b57fc9e80692043e2a3a74e1d2c047eb700dcd0c)
Signed-off-by: Mark Brown <broonie@linaro.org>
Vladimir Murzin [Fri, 28 Feb 2014 09:57:33 +0000 (09:57 +0000)]
arm64: remove return value from psci_init()
psci_init() is written to return an error code if something goes wrong. However,
its single user, setup_arch(), doesn't care about it. Moreover, every error
path already prints a clear message, which is enough for pleasant debugging.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
64b4f60f497058f1c6ba118a0260249ee5c091a6)
Signed-off-by: Mark Brown <broonie@linaro.org>
Vladimir Murzin [Fri, 28 Feb 2014 09:57:47 +0000 (09:57 +0000)]
arm64: remove redundant "psci:" prefixes
Since commit
652af899799354049b273af897b798b8f03fdd88 ("arm64: factor out spin-table
boot method") a "psci:" prefix has been introduced. We have a common pr_fmt,
so clean the now-redundant prefixes up.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
288ac26cc2334e5e6ecad6416e9bf750691afd84)
Signed-off-by: Mark Brown <broonie@linaro.org>
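The cleanup relies on the usual pr_fmt pattern; a hedged illustration (the message text below is invented for the example):

    /* At the top of the file, before the printk-related includes: */
    #define pr_fmt(fmt) "psci: " fmt

    #include <linux/printk.h>

    static void example(void)
    {
            /* Prints "psci: failed to boot CPU1" -- no hand-written prefix. */
            pr_err("failed to boot CPU%d\n", 1);
    }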
Nathan Lynch [Tue, 11 Feb 2014 22:28:42 +0000 (22:28 +0000)]
arm64: vdso: clean up vdso_pagelist initialization
Remove some unnecessary bits that were apparently carried over from
another architecture's implementation:
- No need to get_page() the vdso text/data - these are part of the
kernel image.
- No need for ClearPageReserved on the vdso text.
- No need to vmap the first text page to check the ELF header - this
can be done through &vdso_start.
Also some minor cleanup:
- Use kcalloc for vdso_pagelist array allocation.
- Don't print on allocation failure, slab/slub will do that for us.
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
16fb1a9bec6126162560f159df449e4781560807)
Signed-off-by: Mark Brown <broonie@linaro.org>
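A hedged sketch of the cleaned-up initialisation described above; symbol and helper names follow the commit message, but this is an outline rather than a copy of arch/arm64/kernel/vdso.c:

    extern char vdso_start, vdso_end;       /* linker-provided bounds of the vDSO image */
    static struct page **vdso_pagelist;

    static int __init vdso_init(void)
    {
            unsigned long i, vdso_pages;

            /* The ELF header can be checked directly through &vdso_start;
             * no temporary vmap() of the first page is needed. */
            if (memcmp(&vdso_start, "\177ELF", 4))
                    return -EINVAL;

            vdso_pages = (&vdso_end - &vdso_start) >> PAGE_SHIFT;

            /* kcalloc() zeroes the array; slab/slub already warns on failure. */
            vdso_pagelist = kcalloc(vdso_pages, sizeof(struct page *), GFP_KERNEL);
            if (vdso_pagelist == NULL)
                    return -ENOMEM;

            /* The vDSO pages are part of the kernel image, so no get_page()
             * or ClearPageReserved(): just look up their struct pages. */
            for (i = 0; i < vdso_pages; i++)
                    vdso_pagelist[i] = virt_to_page(&vdso_start + i * PAGE_SIZE);

            return 0;
    }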
Ritesh Harjani [Thu, 6 Feb 2014 11:51:51 +0000 (17:21 +0530)]
arm64: Change misleading function names in dma-mapping
The arm64_swiotlb_alloc/free_coherent names can sometimes be misleading
with CMA support being enabled after this
patch (
c2104debc235b745265b64d610237a6833fd53).
Change these names to the more generic
__dma_alloc/free_coherent.
Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
[catalin.marinas@arm.com: renamed arm64_swiotlb_dma_ops to coherent_swiotlb_dma_ops]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
bb10eb7b4d176f408d45fb492df28bed2981a1f3)
Signed-off-by: Mark Brown <broonie@linaro.org>
Geoff Levand [Tue, 17 Dec 2013 00:19:29 +0000 (00:19 +0000)]
arm64: Fix the soft_restart routine
Change the soft_restart() routine to call cpu_reset() at its identity mapped
physical address.
The cpu_reset() routine must be called at its identity mapped physical address
so that when the MMU is turned off the instruction pointer will be at the correct
location in physical memory.
Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
09024aa61e1bc994404683e2e5b363484a15dd12)
Signed-off-by: Mark Brown <broonie@linaro.org>
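A hedged sketch of the call pattern being introduced; setup_restart() and cpu_reset() are existing arm64 helpers referenced here only in outline:

    void soft_restart(unsigned long addr)
    {
            typedef void (*phys_reset_t)(unsigned long);
            phys_reset_t phys_reset;

            setup_restart();        /* existing helper: caches, MMU preparation */

            /* Branch to cpu_reset() via its identity-mapped physical address,
             * so the PC is already valid physical memory when the MMU goes off. */
            phys_reset = (phys_reset_t)virt_to_phys(cpu_reset);
            phys_reset(addr);

            /* Should never get here. */
            BUG();
    }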
Catalin Marinas [Mon, 17 Feb 2014 12:03:25 +0000 (12:03 +0000)]
arm64: Extend the idmap to the whole kernel image
This patch changes the idmap page table creation during boot to cover
the whole kernel image, allowing functions like cpu_reset() to be safely
called with the physical address.
This patch also simplifies the create_block_map asm macro to no longer
take an idmap argument and always use the phys/virt/end parameters. For
the idmap case, phys == virt.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
ea8c2e1124457f266f82effc3e6558552527943a)
Signed-off-by: Mark Brown <broonie@linaro.org>
Vijaya Kumar K [Fri, 21 Feb 2014 05:13:49 +0000 (05:13 +0000)]
arm64: enable processor debug state for secondary cpus
Processor debug state (PSTATE.D) is only unmasked for secondary CPUs
inside the clear_os_lock smp call, so debug state remains masked in
normal kernel context. With this patch, unmask debug state on secondary
boot so that it is also unmasked in normal kernel context. kgdb tests
now pass on multicore systems.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
d8ed442a009ecfe155b57d58f231db3d6084633d)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/kernel/smp.c
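A hedged sketch of where the unmasking lands on the secondary boot path; the function name comes from arch/arm64/kernel/smp.c, but the exact placement shown here is an assumption based on the commit message:

    asmlinkage void secondary_start_kernel(void)
    {
            /* ... existing secondary CPU bring-up ... */

            local_dbg_enable();     /* clear PSTATE.D so debug exceptions can fire */
            local_irq_enable();

            /* ... enter the idle loop ... */
    }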
Mark Brown [Wed, 21 May 2014 17:39:39 +0000 (18:39 +0100)]
Merge remote-tracking branch 'lsk/v3.10/topic/arm64-kgdb' into lsk-v3.10-arm64-misc
Conflicts:
arch/arm64/include/asm/debug-monitors.h
arch/arm64/kernel/debug-monitors.c
Catalin Marinas [Tue, 4 Feb 2014 16:37:59 +0000 (16:37 +0000)]
arm64: Extend the PCI I/O space to 16MB
The patch moves the PCI I/O space (currently at 64K) before the
earlyprintk mapping and extends it to 16MB.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
22bd1c91fe13d59cff734b69b6757adcfbd8dee9)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
Documentation/arm64/memory.txt
Jiang Liu [Tue, 7 Jan 2014 14:17:12 +0000 (22:17 +0800)]
arm64, jump label: detect %c support for ARM64
Like commit
a9468f30b5eac6 ("ARM: 7333/2: jump label: detect %c
support for ARM"), this patch detects the same thing for ARM64,
because some ARM64 GCC versions have the same issue.
Some versions of ARM64 GCC which do support asm goto do not
support the %c specifier. Since we need %c to support jump
labels on ARM64, detect that too in the asm goto detection script
to avoid build errors with these versions.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
f3c003f72dfb2497056bcbb864885837a1968ed5)
Signed-off-by: Mark Brown <broonie@linaro.org>
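A simplified illustration of why %c matters here: the static key passed to the asm goto must be emitted as a bare constant into the __jump_table section, which is what the %c output modifier provides (a sketch only, not the kernel's exact jump label code):

    #include <linux/jump_label.h>

    static __always_inline bool hypothetical_static_branch(struct static_key *key)
    {
            asm goto("1:    nop\n"
                     "      .pushsection __jump_table, \"aw\"\n"
                     "      .align 3\n"
                     "      .quad 1b, %l[l_yes], %c0\n"
                     "      .popsection\n"
                     : : "i" (key) : : l_yes);
            return false;
    l_yes:
            return true;
    }

Toolchains that accept asm goto but reject %c fail to build code like this, hence the extra check in the detection script.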
Vijaya Kumar K [Tue, 28 Jan 2014 11:20:22 +0000 (11:20 +0000)]
arm64: KGDB: Add KGDB config
Add HAVE_ARCH_KGDB to the arm64 Kconfig.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
9529247db9ecfc5a723e17093614e7437ab0d5bd)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/Kconfig
Vijaya Kumar K [Tue, 28 Jan 2014 11:20:21 +0000 (16:50 +0530)]
misc: debug: remove compilation warnings
Typecast the instruction_pointer macro to unsigned long to
resolve compiler warnings like:
warning: format '%lx' expects argument of type 'long unsigned int',
but argument 2 has type 'u64' [-Wformat]
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
58dcc204f18af2821f683b235bb376f9db2557f5)
Signed-off-by: Mark Brown <broonie@linaro.org>
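A hedged sketch of the cast being described (simplified from the arm64 ptrace definitions): pc is a u64 field, so handing back an unsigned long keeps %lx format strings warning-free:

    #define instruction_pointer(regs)       ((unsigned long)(regs)->pc)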
Vijaya Kumar K [Tue, 28 Jan 2014 11:20:19 +0000 (11:20 +0000)]
arm64: KGDB: Add step debugging support
Add KGDB software step debugging support for EL1 debug
in AArch64 mode.
KGDB registers a step debug handler with the debug monitor.
On receiving the 'step' command from the GDB tool, the target enables
software step debugging and the step address is updated in the ELR.
Software step debugging is disabled when the 'continue' command
is received.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
44679a4f142b69ae0c68ed815a48bbd164827281)
Signed-off-by: Mark Brown <broonie@linaro.org>
Vijaya Kumar K [Tue, 28 Jan 2014 11:20:18 +0000 (16:50 +0530)]
arm64: KGDB: Add Basic KGDB support
Add KGDB debug support for kernel debugging.
With this patch, basic KGDB debugging is possible: the GDB register
layout is updated, and the GDB tool can establish a connection with
the target and set/clear breakpoints.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
bcf5763b0d58d20e288ac52f96cbd7788e262cac)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/kernel/Makefile
Vijaya Kumar K [Tue, 28 Jan 2014 11:20:17 +0000 (11:20 +0000)]
arm64: Add macros to manage processor debug state
Add macros to enable and disable PSTATE.D for debugging.
The local_dbg_save and local_dbg_restore macros are moved to the
irqflags.h file.
KGDB boot tests fail because PSTATE.D is masked;
unmask it for debugging support.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
c7db4ff5d2b459a579d348532a92fd5885520ce6)
Signed-off-by: Mark Brown <broonie@linaro.org>
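A hedged sketch of the enable/disable side of these macros; the DAIF immediate (0x8 selects the D bit) is standard AArch64, but treat the exact macro bodies as illustrative:

    #define local_dbg_enable()      asm("msr daifclr, #8" : : : "memory")
    #define local_dbg_disable()     asm("msr daifset, #8" : : : "memory")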
Mark Rutland [Fri, 7 Feb 2014 17:12:45 +0000 (17:12 +0000)]
arm64: defconfig: Expand default enabled features
FPGA implementations of the Cortex-A57 and Cortex-A53 are now available
in the form of the SMM-A57 and SMM-A53 Soft Macrocell Models (SMMs) for
Versatile Express. As these attach to a Motherboard Express V2M-P1 it
would be useful to have support for some V2M-P1 peripherals enabled by
default.
Additionally, a couple of features have been introduced since the last
defconfig update (CMA, jump labels) that would be good to have enabled
by default to ensure they are build- and boot-tested.
This patch updates the arm64 defconfig to enable support for these
devices and features. The arm64 Kconfig is modified to select
HAVE_PATA_PLATFORM, which is required to enable support for the
CompactFlash controller on the V2M-P1.
A few options which don't need to appear in defconfig are trimmed:
* BLK_DEV - selected by default
* EXPERIMENTAL - otherwise gone from the kernel
* MII - selected by drivers which require it
* USB_SUPPORT - selected by default
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
55834a773fe343912b705bef8114ec93fd337188)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/configs/defconfig
Will Deacon [Tue, 4 Feb 2014 12:29:13 +0000 (12:29 +0000)]
arm64: asm: remove redundant "cc" clobbers
cbnz/tbnz don't update the condition flags, so remove the "cc" clobbers
from inline asm blocks that only use these instructions to implement
conditional branches.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
95c4189689f92fba7ecf9097173404d4928c6e9b)
Signed-off-by: Mark Brown <broonie@linaro.org>
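An illustrative (hypothetical) exclusive-load/store loop in the style of the affected asm blocks: cbnz tests a register directly and leaves the condition flags untouched, so no "cc" clobber is needed:

    static inline void hypothetical_atomic_set_bits(unsigned long *p, unsigned long mask)
    {
            unsigned long val, tmp;

            asm volatile(
            "1:     ldxr    %0, %2\n"
            "       orr     %0, %0, %3\n"
            "       stxr    %w1, %0, %2\n"
            "       cbnz    %w1, 1b"           /* retries without touching the flags */
            : "=&r" (val), "=&r" (tmp), "+Q" (*p)
            : "r" (mask));
    }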
Will Deacon [Thu, 6 Feb 2014 11:30:48 +0000 (11:30 +0000)]
arm64: barriers: allow dsb macro to take option parameter
The dsb instruction takes an option specifying both the target access
types and shareability domain.
This patch allows such an option to be passed to the dsb macro,
resulting in potentially more efficient code. Currently the option is
ignored until all callers are updated (unlike ARM, the option is
mandated by the assembler).
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
4a7ac12eedd190cdf071e61145defa73df1675c0)
Signed-off-by: Mark Brown <broonie@linaro.org>
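A hedged sketch based on the text above: the macro gains an option parameter but still emits a full-system barrier until callers are converted; the stringified form in the comment is the eventual intent, not this commit:

    #define dsb(opt)        asm volatile("dsb sy" : : : "memory")
    /* eventual form, once callers pass real options (illustrative):
     * #define dsb(opt)     asm volatile("dsb " #opt : : : "memory")
     */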
Mark Rutland [Wed, 5 Feb 2014 10:24:13 +0000 (10:24 +0000)]
arm64: simplify pgd_alloc
Currently pgd_alloc has a redundant NULL check in its return path that
can be removed with no ill effects. With that removed it's also possible
to return early and eliminate the new_pgd temporary variable.
This patch applies said modifications, making the logic of pgd_alloc
correspond 1-1 with that of pgd_free.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
883d50a0ed403446437444a495356ce31e1197a3)
Signed-off-by: Mark Brown <broonie@linaro.org>
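A hedged outline of the resulting logic, mirroring pgd_free (allocator calls and the PGD_SIZE test as in the surrounding arm64 code; details illustrative):

    pgd_t *pgd_alloc(struct mm_struct *mm)
    {
            if (PGD_SIZE == PAGE_SIZE)
                    return (pgd_t *)get_zeroed_page(GFP_KERNEL);
            else
                    return kzalloc(PGD_SIZE, GFP_KERNEL);
    }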
Mark Rutland [Wed, 5 Feb 2014 10:24:12 +0000 (10:24 +0000)]
arm64: fix typo: s/SERRROR/SERROR/
Somehow SERROR has acquired an additional 'R' in a couple of headers.
This patch removes them before they spread further. As neither instance
is in use yet, no other sites need to be fixed up.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
bfb67a5606376bb32cb6f93dc05cda2e8c2038a5)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/include/asm/kvm_arm.h
Jingoo Han [Mon, 27 Jan 2014 07:19:32 +0000 (07:19 +0000)]
arm64: mm: fix the function name in comment of cpu_do_switch_mm
Fix the function name in the comment of cpu_do_switch_mm,
because cpu_do_switch_mm is the correct name.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
812944e91dbbfeadaeeb4443a5560a7f45648f0b)
Signed-off-by: Mark Brown <broonie@linaro.org>
Jingoo Han [Tue, 21 Jan 2014 01:17:47 +0000 (01:17 +0000)]
arm64: mm: fix the function name in comment of __flush_dcache_area
Fix the function name in the comment of __flush_dcache_area,
because __flush_dcache_area is the correct name. Also, add
the missing 'size' variable to the comment.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
03324e6e6e66ebd171d9b4b90fd6a2655980dc13)
Signed-off-by: Mark Brown <broonie@linaro.org>
Jingoo Han [Mon, 20 Jan 2014 05:00:21 +0000 (05:00 +0000)]
arm64: mm: use ubfm for dcache_line_size
Use the 'ubfm' bitfield move instruction so that a single
instruction can be used instead of two when getting the
minimum D-cache line size from the CTR_EL0 register.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
bd5f6dc304a054ccdc8dab43bef5e41d9a575b61)
Signed-off-by: Mark Brown <broonie@linaro.org>
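For context, a C rendering of what the assembly macro computes; the real code is a single ubfm on the CTR_EL0 value, and the field layout used below (DminLine in bits [19:16], log2 of the line size in 4-byte words) is the architectural definition:

    static inline unsigned long dcache_line_size_bytes(void)
    {
            unsigned long ctr;

            asm volatile("mrs %0, ctr_el0" : "=r" (ctr));

            /* ubfm extracts CTR_EL0.DminLine in one instruction; in C: */
            return 4UL << ((ctr >> 16) & 0xf);
    }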
Geoff Levand [Sat, 14 Dec 2013 00:20:13 +0000 (00:20 +0000)]
arm64: Remove unused __data_loc variable
The __data_loc variable is an unused leftover from the 32-bit ARM implementation.
Remove the variable and adjust the __mmap_switched startup routine accordingly.
Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
b22cf637bbaf99d4caf9908997a32f91cdcfae52)
Signed-off-by: Mark Brown <broonie@linaro.org>
Liviu Dudau [Tue, 17 Dec 2013 18:19:46 +0000 (18:19 +0000)]
arm64: Remove outdated comment
Code referenced in the comment has moved to arch/arm64/kernel/cputable.c
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
81cac699440fc3707fd80f16bf34a7e506d41487)
Signed-off-by: Mark Brown <broonie@linaro.org>
Sandeepa Prabhu [Wed, 4 Dec 2013 05:50:20 +0000 (05:50 +0000)]
arm64: support single-step and breakpoint handler hooks
AArch64 single-step and breakpoint debug exceptions will be
used by multiple debug frameworks such as kprobes and kgdb.
This patch implements the hooks for those frameworks to register
their own handlers for handling breakpoint and single-step events.
The debug exception handler in entry.S (do_dbg) is reworked to route
the software breakpoint (BRK64) exception to do_debug_exception().
Signed-off-by: Sandeepa Prabhu <sandeepa.prabhu@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
ee6214cec7818867f368c35843ea1f3dffcbb57c)
Signed-off-by: Mark Brown <broonie@linaro.org>
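A hedged sketch of how a client such as kgdb would use the new hooks; the hook structure and return value follow the commit's intent but may differ in detail from the real debug-monitors API:

    static int kgdb_step_handler(struct pt_regs *regs, unsigned int esr)
    {
            /* ... hand control to the debugger ... */
            return DBG_HOOK_HANDLED;
    }

    static struct step_hook kgdb_step_hook = {
            .fn = kgdb_step_handler,
    };

    static void kgdb_register_hooks(void)
    {
            register_step_hook(&kgdb_step_hook);
            /* unregister_step_hook(&kgdb_step_hook) on teardown */
    }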
Will Deacon [Sat, 16 Mar 2013 08:48:13 +0000 (08:48 +0000)]
arm64: debug: consolidate software breakpoint handlers
The software breakpoint handlers are hooked in directly from ptrace,
which makes it difficult to add additional handlers for things like
kprobes and kgdb.
This patch moves the handling code into debug-monitors.c, where we can
dispatch to different debug subsystems more easily.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
1442b6ed249d2b3d2cfcf45b65ac64393495c96c)
Signed-off-by: Mark Brown <broonie@linaro.org>