firefly-linux-kernel-4.4.55.git
9 years ago  ARM: kvm: rename cpu_reset to avoid name clash
Olof Johansson [Wed, 11 Sep 2013 22:27:41 +0000 (15:27 -0700)]
ARM: kvm: rename cpu_reset to avoid name clash

cpu_reset is already #defined in <asm/proc-fns.h> as processor.reset,
so it expands here and causes problems.
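
For reference, the macro in question looks roughly like this (multi-CPU
case; quoted for illustration, not part of the patch):

  /* arch/arm/include/asm/proc-fns.h */
  #define cpu_reset	processor.reset

so any local symbol called cpu_reset in the KVM code silently expands to
processor.reset; renaming the KVM-side symbol sidesteps the macro.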

Cc: <stable@vger.kernel.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit ac570e0493815e0b41681c89cb50d66421429d27)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  kvm: remove .done from struct kvm_async_pf
Radim Krčmář [Wed, 4 Sep 2013 20:32:24 +0000 (22:32 +0200)]
kvm: remove .done from struct kvm_async_pf

'.done' is used to mark the completion of 'async_pf_execute()', but
'cancel_work_sync()' returns true when the work was canceled, so we
use it instead.
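
In other words (a sketch, not the actual diff):

  bool canceled = cancel_work_sync(&work->work);
  /*
   * canceled == true  -> the work item was still pending, so
   *                      async_pf_execute() never ran;
   * canceled == false -> it already ran (or was running), which is
   *                      exactly what '.done' used to track.
   */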

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 98fda169290b3b28c0f2db2b8f02290c13da50ef)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  kvm: free resources after canceling async_pf
Radim Krčmář [Wed, 4 Sep 2013 20:32:23 +0000 (22:32 +0200)]
kvm: free resources after canceling async_pf

When we cancel 'async_pf_execute()', we should behave as if the work was
never scheduled in 'kvm_setup_async_pf()'.
Fixes a bug when we can't unload module because the vm wasn't destroyed.
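
A sketch of the cleanup this implies (names as used in
virt/kvm/async_pf.c; not the literal patch):

  if (cancel_work_sync(&work->work)) {
          /* the work never ran: drop the references that
           * kvm_setup_async_pf() took when queueing it */
          mmdrop(work->mm);
          kvm_put_kvm(vcpu->kvm);
          kmem_cache_free(async_pf_cache, work);
  }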

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 28b441e24088081c1e213139d1303b451a34a4f4)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  KVM: mmu: allow page tables to be in read-only slots
Paolo Bonzini [Mon, 9 Sep 2013 11:52:33 +0000 (13:52 +0200)]
KVM: mmu: allow page tables to be in read-only slots

Page tables in a read-only memory slot will currently cause a triple
fault because the page walker uses gfn_to_hva and it fails on such a slot.

OVMF uses such a page table; however, real hardware seems to be fine with
that as long as the accessed/dirty bits are set.  Save whether the slot
is readonly, and later check it when updating the accessed and dirty bits.

Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit ba6a3541545542721ce821d1e7e5ce35752e6fdf)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: Add newlines to panic strings
Christoffer Dall [Wed, 14 Aug 2013 19:33:48 +0000 (12:33 -0700)]
ARM: KVM: Add newlines to panic strings

The panic strings are hard to read and on narrow terminals some
characters are simply truncated off the panic message.

Make it slightly prettier with a newline in the Hyp panic strings.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 1fe40f6d39d23f39e643607a3e1883bfc74f1244)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: Work around older compiler bug
Christoffer Dall [Mon, 19 Aug 2013 21:16:57 +0000 (14:16 -0700)]
ARM: KVM: Work around older compiler bug

Compilers before 4.6 do not behave well with unnamed fields in structure
initializers and therefore produce build errors:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10676

By referring to the unnamed union using braces, both older and newer
compilers produce the same result.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reported-by: Russell King <linux@arm.linux.org.uk>
Tested-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 6833d83891140aedab7841589b7c7dbd7b600235)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: Simplify tracepoint text
Christoffer Dall [Fri, 9 Aug 2013 03:34:22 +0000 (20:34 -0700)]
ARM: KVM: Simplify tracepoint text

The tracepoint for kvm_guest_fault was extremely long; make it
slightly shorter.

Cc: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 6e72cc5700fe6b8776d537b736dab64b21ae0f1f)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: Fix kvm_set_pte assignment
Christoffer Dall [Fri, 9 Aug 2013 03:35:07 +0000 (20:35 -0700)]
ARM: KVM: Fix kvm_set_pte assignment

The kvm_set_pte function was actually assigning the entire struct to the
structure member, which should work because the structure only has that
one member, but it is still not very nice.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 0963e5d0f22f9d197dbf206d8b5b2a150722cf5e)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: vgic: Bump VGIC_NR_IRQS to 256
Christoffer Dall [Thu, 29 Aug 2013 10:08:25 +0000 (11:08 +0100)]
ARM: KVM: vgic: Bump VGIC_NR_IRQS to 256

The Versatile Express TC2 board, which we use as our main emulated
platform in QEMU, defines 160+32 == 192 interrupts, so limiting the
number of interrupts to 128 is not quite going to cut it for real board
emulation.

Note that this didn't use to be a problem because QEMU was buggy and
only defined 128 interrupts until recently.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
(cherry picked from commit 9b2d2e0df8a49414b1e5bc89148c9984dd87782a)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: Bugfix: vgic_bytemap_get_reg per cpu regs
Christoffer Dall [Thu, 29 Aug 2013 10:08:24 +0000 (11:08 +0100)]
ARM: KVM: Bugfix: vgic_bytemap_get_reg per cpu regs

For bytemaps, each IRQ field is 1 byte wide, so we pack 4 IRQ fields into
one word; since there are 32 private (per-cpu) IRQs, we have 8 private
u32 fields on the vgic_bytemap struct.  We shift the offset from the
base of the register group right by 2, giving us the word index instead
of the field index.  But then there are 8 private words, not 4, which is
also why we subtract 8 words from the offset of the shared words.
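
To make the layout concrete (field names modelled on the vgic code and
shown purely for illustration):

  struct vgic_bytemap {
          u32 percpu[VGIC_MAX_CPUS][32 / 4];   /* 8 private words per cpu */
          u32 shared[(VGIC_NR_IRQS - 32) / 4]; /* the shared interrupts   */
  };

  offset >>= 2;                           /* byte offset -> word index  */
  reg = (offset < 8)
          ? x->percpu[cpuid] + offset     /* one of the 8 private words */
          : x->shared + offset - 8;       /* skip the 8 private words   */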

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
(cherry picked from commit 8d98915b6bda499e47d19166101d0bbcfd409c80)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: vgic: fix GICD_ICFGRn access
Marc Zyngier [Thu, 29 Aug 2013 10:08:23 +0000 (11:08 +0100)]
ARM: KVM: vgic: fix GICD_ICFGRn access

All the code in handle_mmio_cfg_reg() assumes the offset has
been shifted right to accommodate the 2:1 bit compression,
but this is only done when getting the register address.

Shift the offset early so the code works mostly unchanged.

Reported-by: Zhaobo (Bob, ERC) <zhaobo@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
(cherry picked from commit 6545eae3d7a1b6dc2edb8ede9107998aee1207ef)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: vgic: simplify vgic_get_target_reg
Marc Zyngier [Thu, 29 Aug 2013 10:08:22 +0000 (11:08 +0100)]
ARM: KVM: vgic: simplify vgic_get_target_reg

vgic_get_target_reg is quite complicated, for no good reason.
Actually, it is fairly easy to write it in a much more efficient
way by using the target CPU array instead of the bitmap.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
(cherry picked from commit 986af8e0789a41ac4844e6eefed4a33e86524918)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  kvm: optimize away THP checks in kvm_is_mmio_pfn()
Andrea Arcangeli [Thu, 25 Jul 2013 01:04:38 +0000 (03:04 +0200)]
kvm: optimize away THP checks in kvm_is_mmio_pfn()

The checks on PG_reserved in the page structure on head and tail pages
aren't necessary because split_huge_page wouldn't transfer the
PG_reserved bit from head to tail anyway.

This was a forward-thinking check for the case where PageReserved was
set on a driver-owned page mapped into userland with something like
remap_pfn_range in a VM_PFNMAP region, but using hugepmds (not
possible right now). It was meant to be very safe, but it's overkill
as it's unlikely split_huge_page could ever run without the driver
noticing and tearing down the hugepage itself.

And if a driver in the future really wants to map a reserved
hugepage in userland using a huge pmd, it should simply take care of
marking all subpages reserved too to keep KVM safe. This of course
would require such a hypothetical driver to tear down the huge pmd
and split the hugepage itself, instead of relying on
split_huge_page, but that sounds very reasonable, especially
considering split_huge_page wouldn't currently transfer the reserved
bit anyway.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
(cherry picked from commit 11feeb498086a3a5907b8148bdf1786a9b18fc55)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  kvm: use anon_inode_getfd() with O_CLOEXEC flag
Yann Droneaud [Sat, 24 Aug 2013 20:14:07 +0000 (22:14 +0200)]
kvm: use anon_inode_getfd() with O_CLOEXEC flag

KVM uses anon_inode_getfd() to allocate file descriptors as part
of some of its ioctls. But those ioctls lack a flag argument
allowing userspace to choose options for the newly opened file descriptor.

In such a case it's advised to use O_CLOEXEC by default so that
userspace can choose, without a race, whether the file descriptor
is going to be inherited across exec().

This patch sets the O_CLOEXEC flag on all file descriptors created
with anon_inode_getfd() so that they are not leaked across exec().
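
The call sites now follow this pattern (the vcpu fd shown here as an
example sketch):

  fd = anon_inode_getfd("kvm-vcpu", &kvm_vcpu_fops, vcpu,
                        O_RDWR | O_CLOEXEC);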

Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Link: http://lkml.kernel.org/r/cover.1377372576.git.ydroneaud@opteya.com
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
(cherry picked from commit 24009b0549de563006705b9af8694fc8fc9a5aa1)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: 7808/1: KVM: mm: Get rid of L_PTE_USER ref from PAGE_S2_DEVICE
Christoffer Dall [Tue, 6 Aug 2013 04:34:16 +0000 (05:34 +0100)]
ARM: 7808/1: KVM: mm: Get rid of L_PTE_USER ref from PAGE_S2_DEVICE

The L_PTE_USER flag actually has nothing to do with stage 2 mappings, and
the L_PTE_S2_RDWR value sets the readable bit, which is what L_PTE_USER
was used for before proper handling of stage 2 memory defines.

Changelog:
  [v3]: Drop call to kvm_set_s2pte_writable in mmu.c
  [v2]: Change default mappings to be r/w instead of r/o, as per Marc
     Zyngier's suggestion.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
(cherry picked from commit 8947c09d05da9f0436f423518f449beaa5ea1bdc)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: kvm: use inner-shareable barriers after TLB flushing
Will Deacon [Mon, 13 May 2013 11:08:06 +0000 (12:08 +0100)]
ARM: kvm: use inner-shareable barriers after TLB flushing

When flushing the TLB at PL2 in response to remapping at stage-2 or VMID
rollover, we have a dsb instruction to ensure completion of the command
before continuing.

Since we only care about other processors for TLB invalidation, use the
inner-shareable variant of the dsb instruction instead.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit e3ab547f57bd626201d4b715b696c80ad1ef4ba2)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  KVM: ARM: Squash len warning
Christoffer Dall [Tue, 30 Jul 2013 03:46:04 +0000 (20:46 -0700)]
KVM: ARM: Squash len warning

The 'len' variable was declared as unsigned and then checked for being
less than 0, which results in warnings on some compilers.  Since len is
assigned from an int, make it an int.
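
The warning comes from a pattern like this (illustrative only;
'signed_len' is a placeholder, not the real field):

  unsigned len;

  len = signed_len;     /* assigned from an int elsewhere */
  if (len < 0)          /* always false for an unsigned type */
          return -EINVAL;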

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 2184a60de26b94bc5a88de3e5a960ef9ff54ba5a)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: use 'int' instead of 'u32' for variable 'target' in kvm_host.h.
Chen Gang [Mon, 22 Jul 2013 03:40:38 +0000 (04:40 +0100)]
arm64: KVM: use 'int' instead of 'u32' for variable 'target' in kvm_host.h.

'target' will be set to '-1' in kvm_arch_vcpu_init(), and
kvm_vcpu_initialized() needs to check whether 'target' is less than zero.

So 'target' needs to be defined as 'int' instead of 'u32', just like ARM has done.

The related warning:

  arch/arm64/kvm/../../../arch/arm/kvm/arm.c:497:2: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]
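
In practice this boils down to (a sketch matching the description above,
not the literal diff):

  /* kvm_arch_vcpu_init() sets vcpu->arch.target = -1 */

  static bool kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
  {
          /* only meaningful when 'target' is a signed int */
          return vcpu->arch.target >= 0;
  }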

Signed-off-by: Chen Gang <gang.chen@asianux.com>
[Marc: reformated the Subject line to fit the series]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 6c8c0c4dc0e98ee2191211d66e9f876e95787073)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: add missing dsb before invalidating Stage-2 TLBs
Marc Zyngier [Tue, 11 Jun 2013 17:05:25 +0000 (18:05 +0100)]
arm64: KVM: add missing dsb before invalidating Stage-2 TLBs

When performing a Stage-2 TLB invalidation, it is necessary to
make sure the write to the page tables is observable by all CPUs.

For this purpose, add dsb instructions to __kvm_tlb_flush_vmid_ipa
and __kvm_flush_vm_context before doing the TLB invalidation itself.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit f142e5eeb724cfbedd203b32b3b542d78dbe2545)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: perform save/restore of PAR_EL1
Marc Zyngier [Fri, 7 Jun 2013 10:02:34 +0000 (11:02 +0100)]
arm64: KVM: perform save/restore of PAR_EL1

Not saving PAR_EL1 is an unfortunate oversight. If the guest
performs an AT* operation and gets scheduled out before reading
the result of the translation from PAR_EL1, it could become
corrupted by another guest or the host.

Saving this register is made slightly more complicated as KVM also
uses it on the permission fault handling path, leading to an ugly
"stash and restore" sequence. Fortunately, this is already a slow
path so we don't really care. Also, Linux doesn't do any AT*
operation, so Linux guests are not impacted by this bug.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 1bbd80549810637b7381ab0649ba7c7d62f1342a)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: fix 2-level page tables unmapping
Marc Zyngier [Tue, 6 Aug 2013 12:05:48 +0000 (13:05 +0100)]
arm64: KVM: fix 2-level page tables unmapping

When using 64kB pages, we only have two levels of page tables,
meaning that PGD, PUD and PMD are fused. In this case, trying
to refcount PUDs and PMDs independently is a complete disaster,
as they are the same.

We manage to get it right for the allocation (stage2_set_pte uses
{pmd,pud}_none), but the unmapping path clears both pud and pmd
refcounts, which fails spectacularly with 2-level page tables.

The fix is to avoid calling clear_pud_entry when both the pmd and
pud pages are empty. For this, instead of introducing another
pud_empty function, consolidate both pte_empty and pmd_empty into
page_empty (the code is actually identical) and use that to also
test the validity of the pud.
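
The consolidated helper is essentially (a sketch, assuming the usual
page_count() refcounting of page table pages):

  static bool page_empty(void *ptr)
  {
          struct page *ptr_page = virt_to_page(ptr);

          /* only the initial allocation reference is left */
          return page_count(ptr_page) == 1;
  }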

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 979acd5e18c3e5cb7e3308c699d79553af5af8c6)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: Fix unaligned unmap_range leak
Christoffer Dall [Tue, 6 Aug 2013 20:50:54 +0000 (13:50 -0700)]
ARM: KVM: Fix unaligned unmap_range leak

The unmap_range function did not properly cover the case when the start
address was not aligned to PMD_SIZE or PUD_SIZE and an entire pte table
or pmd table was cleared, causing us to leak memory when incrementing
the addr.

The fix is to always move onto the next page table entry boundary
instead of adding the full size of the VA range covered by the
corresponding table level entry.
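
In other words, the walk now advances with the *_addr_end() helpers
rather than a fixed stride (a sketch, not the literal diff):

  /* before: adding the full size covered by the entry from an
   * unaligned addr skips past the following entries */
  addr += PMD_SIZE;

  /* after: always step to the next pmd boundary, capped at 'end' */
  addr = pmd_addr_end(addr, end);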

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit d3840b26614d8ce3db53c98061d9fcb1b9ccb0dd)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  KVM: Introduce kvm_arch_memslots_updated()
Takuya Yoshikawa [Thu, 4 Jul 2013 04:40:29 +0000 (13:40 +0900)]
KVM: Introduce kvm_arch_memslots_updated()

This is called right after the memslots are updated, i.e. when the result
of update_memslots() gets installed in install_new_memslots().  Since
the memslots need to be updated twice when we delete or move a memslot,
kvm_arch_commit_memory_region() does not correspond to this exactly.

In the following patch, x86 will use this new API to check if the mmio
generation has reached its maximum value, in which case mmio sptes need
to be flushed out.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Acked-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit e59dbe09f8e6fb8f6ee19dc79d1a2f14299e4cd2)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: Kconfig integration
Marc Zyngier [Thu, 4 Jul 2013 12:34:32 +0000 (13:34 +0100)]
arm64: KVM: Kconfig integration

Finally plug KVM/arm64 into the config system, making it possible
to enable KVM support on AArch64 CPUs.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit c3eb5b14449a0949e9764d39374a2ea63faae14f)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: kvm: don't include drivers/virtio/Kconfig
Arnd Bergmann [Fri, 21 Jun 2013 20:33:22 +0000 (22:33 +0200)]
ARM: kvm: don't include drivers/virtio/Kconfig

The virtio configuration has recently moved and is now visible everywhere.
Including the file from KVM again, as we needed to earlier, now causes
dependency problems:

warning: (CAIF_VIRTIO && VIRTIO_PCI && VIRTIO_MMIO && REMOTEPROC && RPMSG)
selects VIRTIO which has unmet direct dependencies (VIRTUALIZATION)

Cc: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 8bd4ffd6b3a98f00267051dc095076ea2ff06ea8)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm/kvm: Cleanup KVM_ARM_MAX_VCPUS logic
Geoff Levand [Fri, 7 Jun 2013 01:02:54 +0000 (18:02 -0700)]
arm/kvm: Cleanup KVM_ARM_MAX_VCPUS logic

Commit d21a1c83c7595e387545632e44cd7797b76e19cc (ARM: KVM: define KVM_ARM_MAX_VCPUS
unconditionally) changed the Kconfig logic for KVM_ARM_MAX_VCPUS to work around a
build error arising from the use of KVM_ARM_MAX_VCPUS when CONFIG_KVM=n.  The
resulting Kconfig logic is a bit awkward and leaves a KVM_ARM_MAX_VCPUS always
defined in the kernel config file.

This change reverts the Kconfig logic back and adds a simple preprocessor
conditional in kvm_host.h to handle when CONFIG_KVM_ARM_MAX_VCPUS is undefined.
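
I.e. something along these lines in kvm_host.h (a sketch of the kind of
conditional described, not necessarily the exact values):

  #ifdef CONFIG_KVM_ARM_MAX_VCPUS
  #define KVM_MAX_VCPUS CONFIG_KVM_ARM_MAX_VCPUS
  #else
  #define KVM_MAX_VCPUS 0
  #endif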

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit f2dda9d829818b055510187059cdfa4ece10c82d)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: get rid of S2_PGD_SIZE
Marc Zyngier [Tue, 14 May 2013 11:11:39 +0000 (12:11 +0100)]
ARM: KVM: get rid of S2_PGD_SIZE

S2_PGD_SIZE defines the number of pages used by a stage-2 PGD
and is unused, except for a VM_BUG_ON check that misuses the
define.

As the check is very unlikely to ever trigger except in
circumstances where KVM is the least of our worries, just kill
both the define and the VM_BUG_ON check.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
(cherry picked from commit 4db845c3d8e2f8a219e8ac48834dd4fe085e5d63)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: don't special case PC when doing an MMIO
Marc Zyngier [Tue, 14 May 2013 11:11:38 +0000 (12:11 +0100)]
ARM: KVM: don't special case PC when doing an MMIO

Admittedly, reading an MMIO register to load the PC is very weird.
Writing PC to a MMIO register is probably even worse. But
the architecture doesn't forbid any of these, and injecting
a Prefetch Abort is the wrong thing to do anyway.

Remove this check altogether, and let the adventurous guest
wander into LaLaLand if they feel compelled to do so.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
(cherry picked from commit 8734f16fb2aa4ff0bb57ad6532661a38bc8ff957)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: use phys_addr_t instead of unsigned long long for HYP PGDs
Marc Zyngier [Tue, 14 May 2013 11:11:37 +0000 (12:11 +0100)]
ARM: KVM: use phys_addr_t instead of unsigned long long for HYP PGDs

HYP PGDs are passed around as phys_addr_t, except just before calling
into the hypervisor init code, where they are cast to a rather weird
unsigned long long.

Just keep them around as phys_addr_t, which is what makes the most
sense.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
(cherry picked from commit dac288f7b38a7439502b77dabcdf8a9a5c4ae721)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: remove dead prototype for __kvm_tlb_flush_vmid
Marc Zyngier [Tue, 14 May 2013 11:11:35 +0000 (12:11 +0100)]
ARM: KVM: remove dead prototype for __kvm_tlb_flush_vmid

__kvm_tlb_flush_vmid has been renamed to __kvm_tlb_flush_vmid_ipa,
and the old prototype should have been removed when the code was
modified.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
(cherry picked from commit 368074d908b785588778f00b4384376cd636f4a1)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: Don't handle PSCI calls via SMC
Dave P Martin [Wed, 1 May 2013 16:49:28 +0000 (17:49 +0100)]
ARM: KVM: Don't handle PSCI calls via SMC

Currently, kvmtool unconditionally declares that HVC should be used
to call PSCI, so the function numbers in the DT tell the guest
nothing about the function ID namespace or calling convention for
SMC.

We already assume that the guest will examine and honour the DT,
since there is no way it could possibly guess the KVM-specific PSCI
function IDs otherwise.  So let's not encourage guests to violate
what's specified in the DT by using SMC to make the call.

[ Modified to apply to top of kvm/arm tree - Christoffer ]

Signed-off-by: Dave P Martin <Dave.Martin@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
(cherry picked from commit 24a7f675752e06729589d40a5256970998a21502)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: Allow host virt timer irq to be different from guest timer virt irq
Anup Patel [Tue, 30 Apr 2013 06:32:15 +0000 (12:02 +0530)]
ARM: KVM: Allow host virt timer irq to be different from guest timer virt irq

The arch_timer irq numbers (or PPI numbers) are implementation dependent,
so the host virtual timer irq number can be different from guest virtual
timer irq number.

This patch ensures that host virtual timer irq number is read from DTB and
guest virtual timer irq is determined based on vcpu target type.

Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
(cherry picked from commit 5ae7f87a56fab10b8f9b135a8377c144397293ca)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: document kernel object mappings in HYP
Marc Zyngier [Thu, 2 May 2013 13:31:03 +0000 (14:31 +0100)]
arm64: KVM: document kernel object mappings in HYP

HYP mode has access to some of the kernel pages. Document the
memory mapping and the offset between kernel VA and HYP VA.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit aa4a73a0a23a65a2f531d01f1865d1e61c6acb55)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: MAINTAINERS update
Marc Zyngier [Tue, 2 Apr 2013 16:49:40 +0000 (17:49 +0100)]
arm64: KVM: MAINTAINERS update

Elect myself as the KVM/arm64 maintainer.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 6394a3ec02ab39147aab9ea56d0dabafd3dcae60)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: userspace API documentation
Marc Zyngier [Tue, 2 Apr 2013 16:46:31 +0000 (17:46 +0100)]
arm64: KVM: userspace API documentation

Unsurprisingly, the arm64 userspace API is extremely similar to
the 32bit one, the only significant difference being the ONE_REG
register mapping.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 379e04c79e8a9ded8a202f1e266f0c5830185bea)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: enable initialization of a 32bit vcpu
Marc Zyngier [Thu, 7 Feb 2013 10:46:46 +0000 (10:46 +0000)]
arm64: KVM: enable initialization of a 32bit vcpu

Wire the init of a 32bit vcpu by allowing 32bit modes in pstate,
and providing sensible defaults out of reset state.

This feature is of course conditioned by the presence of 32bit
capability on the physical CPU, and is checked by the KVM_CAP_ARM_EL1_32BIT
capability.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 0d854a60b1d7d39a37b25dd28f63cfa0df637b91)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: 32bit guest fault injection
Marc Zyngier [Wed, 6 Feb 2013 11:29:35 +0000 (11:29 +0000)]
arm64: KVM: 32bit guest fault injection

Add fault injection capability for 32bit guests.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit e82e030556e42e823e174e0c3bd97988d1a09d1f)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: 32bit specific register world switch
Marc Zyngier [Thu, 7 Feb 2013 10:52:10 +0000 (10:52 +0000)]
arm64: KVM: 32bit specific register world switch

Allow registers specific to 32bit guests to be saved/restored
during the world switch.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit b4afad06c19e3489767532f86ff453a1d1e28b8c)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: CPU specific 32bit coprocessor access
Marc Zyngier [Thu, 7 Feb 2013 10:50:18 +0000 (10:50 +0000)]
arm64: KVM: CPU specific 32bit coprocessor access

Enable handling of CPU specific 32bit coprocessor access. Not much
here either.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 06c7654d2fb8bac7b1af4340ad59434a5d89b86a)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: 32bit handling of coprocessor traps
Marc Zyngier [Thu, 7 Feb 2013 10:32:33 +0000 (10:32 +0000)]
arm64: KVM: 32bit handling of coprocessor traps

Provide the necessary infrastructure to trap coprocessor accesses that
occur when running 32bit guests.

Also wire up SMC and HVC traps in 32bit mode while we're at it.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 62a89c44954f09072bf07a714c8f68bda14ab87e)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: 32bit conditional execution emulation
Marc Zyngier [Wed, 6 Feb 2013 19:54:04 +0000 (19:54 +0000)]
arm64: KVM: 32bit conditional execution emulation

As conditional instructions can trap on AArch32, add the thinnest
possible emulation layer to keep 32bit guests happy.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 27b190bd9fbfee34536cb858f0b5924d294aac38)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: 32bit GP register access
Marc Zyngier [Wed, 6 Feb 2013 19:40:29 +0000 (19:40 +0000)]
arm64: KVM: 32bit GP register access

Allow access to the 32bit register file through the usual API.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit b547631fc64e249a3c507e6ce854642507fa7c1c)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: define 32bit specific registers
Marc Zyngier [Wed, 6 Feb 2013 19:17:50 +0000 (19:17 +0000)]
arm64: KVM: define 32bit specific registers

Define the 32bit specific registers (SPSRs, cp15...).

Most CPU registers are directly mapped to a 64bit register
(r0->x0...). Only the SPSRs have separate registers.

cp15 registers are also mapped into their 64bit counterpart in most
cases.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 40033a614ea3db196d57c477ca328f44eb1e4df0)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: Build system integration
Marc Zyngier [Mon, 10 Dec 2012 16:41:44 +0000 (16:41 +0000)]
arm64: KVM: Build system integration

Only the Makefile is plugged in. The Kconfig stuff is in a separate
patch to allow for an easier merge process.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 6211753fdfd05af9e08f54c8d0ba3ee516034878)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: PSCI implementation
Marc Zyngier [Wed, 12 Dec 2012 18:52:05 +0000 (18:52 +0000)]
arm64: KVM: PSCI implementation

Wire the PSCI backend into the exit handling code.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit dcd2e40c1e1cce302498d16d095b0f8a30326f74)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: Plug the arch timer
Marc Zyngier [Fri, 7 Dec 2012 17:52:03 +0000 (17:52 +0000)]
arm64: KVM: Plug the arch timer

Add support for the in-kernel timer emulation.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 003300de6c3e51934fb52eb2677f6f4fb4996cbd)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: timer: allow DT matching for ARMv8 cores
Marc Zyngier [Thu, 30 May 2013 17:31:28 +0000 (18:31 +0100)]
ARM: KVM: timer: allow DT matching for ARMv8 cores

ARMv8 cores have the exact same timer as ARMv7 cores. Make sure the
KVM timer code can match it in the device tree.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit f61701e0a24a09aa4a44baf24e57dcc5e706afa8)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: Plug the VGIC
Marc Zyngier [Fri, 7 Dec 2012 17:54:54 +0000 (17:54 +0000)]
arm64: KVM: Plug the VGIC

Add support for the in-kernel GIC emulation.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 1f17f3b6044d8a81a74dc6c962b3b38a7336106b)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: Exit handling
Marc Zyngier [Mon, 10 Dec 2012 16:40:41 +0000 (16:40 +0000)]
arm64: KVM: Exit handling

Handle the exit of a VM, decoding the exit reason from HYP mode
and calling the corresponding handler.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit c4b1afd022e93eada6ee4b209be37101cd4b3494)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: HYP mode world switch implementation
Marc Zyngier [Mon, 10 Dec 2012 16:40:18 +0000 (16:40 +0000)]
arm64: KVM: HYP mode world switch implementation

The HYP mode world switch in all its glory.

Implements save/restore of host/guest registers, EL2 trapping,
IPA resolution, and additional services (tlb invalidation).

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 55c7401d92e16360e0987afe39355f1eb6300f31)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: hypervisor initialization code
Marc Zyngier [Mon, 17 Dec 2012 17:07:52 +0000 (17:07 +0000)]
arm64: KVM: hypervisor initialization code

Provide EL2 with page tables and stack, and set the vectors
to point to the full blown world-switch code.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 092bd143cbb481b4ce1d55247a2987eaaf61f967)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: guest one-reg interface
Marc Zyngier [Mon, 10 Dec 2012 16:37:02 +0000 (16:37 +0000)]
arm64: KVM: guest one-reg interface

Let userspace play with the guest registers.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 2f4a07c5f9fe4a5cdb9867e1e2fcab3165846ea7)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: MMIO access backend
Marc Zyngier [Mon, 10 Dec 2012 16:29:50 +0000 (16:29 +0000)]
arm64: KVM: MMIO access backend

Define the necessary structures to perform an MMIO access.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit d7246bf3571a82834984a42db52261525bc11159)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: kvm_arch and kvm_vcpu_arch definitions
Marc Zyngier [Mon, 10 Dec 2012 16:29:28 +0000 (16:29 +0000)]
arm64: KVM: kvm_arch and kvm_vcpu_arch definitions

Provide the architecture dependent structures for VM and
vcpu abstractions.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 4f8d6632ec71372a3b8dbb4775662c2c9025d173)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: virtual CPU reset
Marc Zyngier [Mon, 10 Dec 2012 16:23:59 +0000 (16:23 +0000)]
arm64: KVM: virtual CPU reset

Provide the reset code for a virtual CPU booted in 64bit mode.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit f4672752c321ea36ce099cebdd7a082a8f327505)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: CPU specific system registers handling
Marc Zyngier [Wed, 6 Feb 2013 17:30:48 +0000 (17:30 +0000)]
arm64: KVM: CPU specific system registers handling

Add the support code for CPU specific system registers. Not much
here yet.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit b990a9d3152bddca62cc1f8bf80518430b98737b)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: system register handling
Marc Zyngier [Mon, 10 Dec 2012 16:15:34 +0000 (16:15 +0000)]
arm64: KVM: system register handling

Provide 64bit system register handling, modeled after the cp15
handling for ARM.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 7c8c5e6a9101ea57a1c2c9faff0917e79251a21e)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: user space interface
Marc Zyngier [Mon, 10 Dec 2012 16:29:28 +0000 (16:29 +0000)]
arm64: KVM: user space interface

Provide the kvm.h file that defines the user space visible
interface.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 54f81d0eb93896da73d1636bca84cf90f52cabdf)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: architecture specific MMU backend
Marc Zyngier [Mon, 10 Dec 2012 15:35:24 +0000 (15:35 +0000)]
arm64: KVM: architecture specific MMU backend

Define the arm64 specific MMU backend:
- HYP/kernel VA offset
- S2 4/64kB definitions
- S2 page table populating and flushing
- icache cleaning

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 37c437532b0126d1df5685080db9cecf3d918175)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: fault injection into a guest
Marc Zyngier [Mon, 17 Dec 2012 12:27:42 +0000 (12:27 +0000)]
arm64: KVM: fault injection into a guest

Implement the injection of a fault (undefined, data abort or
prefetch abort) into a 64bit guest.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit aa8eff9bfbd531e0fcc8e68052f4ac545cd004c5)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: Basic ESR_EL2 helpers and vcpu register access
Marc Zyngier [Mon, 10 Dec 2012 13:27:52 +0000 (13:27 +0000)]
arm64: KVM: Basic ESR_EL2 helpers and vcpu register access

Implements helpers for dealing with the EL2 syndrome register as
well as accessing the vcpu registers.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 83a4979483c8e597b69d4403794f87fea51fa549)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: system register definitions for 64bit guests
Marc Zyngier [Mon, 10 Dec 2012 11:16:40 +0000 (11:16 +0000)]
arm64: KVM: system register definitions for 64bit guests

Define the saved/restored registers for 64bit guests.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit fd9fc9f73cc2070d2637a7ee082800a817fd45f3)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: EL2 register definitions
Marc Zyngier [Mon, 10 Dec 2012 10:46:47 +0000 (10:46 +0000)]
arm64: KVM: EL2 register definitions

Define all the useful bitfields for EL2 registers.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 0369f6a34b9facd16eea4236518ca6f9cbc9e5ef)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  arm64: KVM: HYP mode idmap support
Marc Zyngier [Fri, 7 Dec 2012 18:40:43 +0000 (18:40 +0000)]
arm64: KVM: HYP mode idmap support

Add the necessary infrastructure for identity-mapped HYP page
tables. Idmap-ed code must be in the ".hyp.idmap.text" linker
section.

The rest of the HYP ends up in ".hyp.text".

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 2240bbb697354f5617d95e3ee104ca61bb812507)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: arch_timers: zero CNTVOFF upon return to host
Mark Rutland [Tue, 26 Mar 2013 13:41:35 +0000 (13:41 +0000)]
ARM: KVM: arch_timers: zero CNTVOFF upon return to host

To use the virtual counters from the host, we need to ensure that
CNTVOFF doesn't change unexpectedly. When we change to a guest, we
replace the host's CNTVOFF, but we don't restore it when returning to
the host.

As the host sets CNTVOFF to zero, and never changes it, we can simply
zero CNTVOFF when returning to the host. This patch adds said zeroing to
the return to host path.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Christoffer Dall <cdall@cs.columbia.edu>
(cherry picked from commit f793c23ebbe5afd1cabf4a42a3a297022213756f)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  KVM: get rid of $(addprefix ../../../virt/kvm/, ...) in Makefiles
Marc Zyngier [Tue, 14 May 2013 13:31:02 +0000 (14:31 +0100)]
KVM: get rid of $(addprefix ../../../virt/kvm/, ...) in Makefiles

As requested by the KVM maintainers, remove the addprefix used to
refer to the main KVM code from the arch code, and replace it with
a KVM variable that does the same thing.

Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Christoffer Dall <cdall@cs.columbia.edu>
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
(cherry picked from commit 535cf7b3b13c7faed3dfabafb6598417de1129ca)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  ARM: KVM: move GIC/timer code to a common location
Marc Zyngier [Tue, 14 May 2013 13:31:01 +0000 (14:31 +0100)]
ARM: KVM: move GIC/timer code to a common location

As KVM/arm64 is looming on the horizon, it makes sense to move some
of the common code to a single location in order to reduce duplication.

The code could live anywhere. Actually, most of KVM is already built
with a bunch of ugly ../../.. hacks in the various Makefiles, so we're
not exactly talking about style here. But maybe it is time to start
moving in a less ugly direction.

The include files must be in a "public" location, as they are accessed
from non-KVM files (arch/arm/kernel/asm-offsets.c).

For this purpose, introduce two new locations:
- virt/kvm/arm/ : x86 and ia64 already share the ioapic code in
  virt/kvm, so this could be seen as a (very ugly) precedent.
- include/kvm/  : there is already an include/xen, and while the
  intent is slightly different, this seems as good a location as
  any

Eventually, we should probably have independent Makefiles at every
level (just like everywhere else in the kernel), but this is just
the first step.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
(cherry picked from commit 7275acdfe29ba03ad2f6e150386900c4e2d43fb1)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  KVM: add missing misc_deregister() on error in kvm_init()
Wei Yongjun [Sun, 5 May 2013 12:03:40 +0000 (20:03 +0800)]
KVM: add missing misc_deregister() on error in kvm_init()

Add the missing misc_deregister() before return from kvm_init()
in the debugfs init error handling case.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
(cherry picked from commit afc2f792cdcb67f4257f0e68d10ee4a7b7eae57a)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
9 years ago  Merge tag 'v3.10.13' into lsk/v3.10/topic/kvm
Christoffer Dall [Thu, 2 Oct 2014 15:10:08 +0000 (17:10 +0200)]
Merge tag 'v3.10.13' into lsk/v3.10/topic/kvm

This is the 3.10.13 stable release

9 years ago  arm64: add support for reserved memory defined by device tree
Marek Szyprowski [Fri, 28 Feb 2014 13:42:55 +0000 (14:42 +0100)]
arm64: add support for reserved memory defined by device tree

Enable reserved memory initialization from device tree.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit 9bf14b7c540ae9ca7747af3a0c0d8470ef77b6ce)
Signed-off-by: Mark Brown <broonie@kernel.org>
9 years ago  Merge remote-tracking branch 'lsk/v3.10/topic/libfdt' into lsk-v3.10-arm64-misc
Mark Brown [Sat, 13 Sep 2014 17:23:07 +0000 (10:23 -0700)]
Merge remote-tracking branch 'lsk/v3.10/topic/libfdt' into lsk-v3.10-arm64-misc

Conflicts:
drivers/of/fdt.c

9 years ago  arm64: atomics: fix use of acquire + release for full barrier semantics
Will Deacon [Tue, 4 Feb 2014 12:29:12 +0000 (12:29 +0000)]
arm64: atomics: fix use of acquire + release for full barrier semantics

Linux requires a number of atomic operations to provide full barrier
semantics, that is no memory accesses after the operation can be
observed before any accesses up to and including the operation in
program order.

On arm64, these operations have been incorrectly implemented as follows:

// A, B, C are independent memory locations

<Access [A]>

// atomic_op (B)
1: ldaxr x0, [B] // Exclusive load with acquire
<op(B)>
stlxr w1, x0, [B] // Exclusive store with release
cbnz w1, 1b

<Access [C]>

The assumption here being that two half barriers are equivalent to a
full barrier, so the only permitted ordering would be A -> B -> C
(where B is the atomic operation involving both a load and a store).

Unfortunately, this is not the case by the letter of the architecture
and, in fact, the accesses to A and C are permitted to pass their
nearest half barrier resulting in orderings such as Bl -> A -> C -> Bs
or Bl -> C -> A -> Bs (where Bl is the load-acquire on B and Bs is the
store-release on B). This is a clear violation of the full barrier
requirement.

The simple way to fix this is to implement the same algorithm as ARMv7
using explicit barriers:

<Access [A]>

// atomic_op (B)
dmb ish // Full barrier
1: ldxr x0, [B] // Exclusive load
<op(B)>
stxr w1, x0, [B] // Exclusive store
cbnz w1, 1b
dmb ish // Full barrier

<Access [C]>

but this has the undesirable effect of introducing *two* full barrier
instructions. A better approach is actually the following, non-intuitive
sequence:

<Access [A]>

// atomic_op (B)
1: ldxr x0, [B] // Exclusive load
<op(B)>
stlxr w1, x0, [B] // Exclusive store with release
cbnz w1, 1b
dmb ish // Full barrier

<Access [C]>

The simple observations here are:

  - The dmb ensures that no subsequent accesses (e.g. the access to C)
    can enter or pass the atomic sequence.

  - The dmb also ensures that no prior accesses (e.g. the access to A)
    can pass the atomic sequence.

  - Therefore, no prior access can pass a subsequent access, or
    vice-versa (i.e. A is strictly ordered before C).

  - The stlxr ensures that no prior access can pass the store component
    of the atomic operation.

The only tricky part remaining is the ordering between the ldxr and the
access to A, since the absence of the first dmb means that we're now
permitting re-ordering between the ldxr and any prior accesses.

From an (arbitrary) observer's point of view, there are two scenarios:

  1. We have observed the ldxr. This means that if we perform a store to
     [B], the ldxr will still return older data. If we can observe the
     ldxr, then we can potentially observe the permitted re-ordering
     with the access to A, which is clearly an issue when compared to
     the dmb variant of the code. Thankfully, the exclusive monitor will
     save us here since it will be cleared as a result of the store and
     the ldxr will retry. Notice that any use of a later memory
     observation to imply observation of the ldxr will also imply
     observation of the access to A, since the stlxr/dmb ensure strict
     ordering.

  2. We have not observed the ldxr. This means we can perform a store
     and influence the later ldxr. However, that doesn't actually tell
     us anything about the access to [A], so we've not lost anything
     here either when compared to the dmb variant.

This patch implements this solution for our barriered atomic operations,
ensuring that we satisfy the full barrier requirements where they are
needed.

Cc: <stable@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 8e86f0b409a44193f1587e87b69c5dcf8f65be67)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years ago  arm64: mm: Make icache synchronisation logic huge page aware
Steve Capper [Wed, 2 Jul 2014 10:46:23 +0000 (11:46 +0100)]
arm64: mm: Make icache synchronisation logic huge page aware

The __sync_icache_dcache routine will only flush the dcache for the
first page of a compound page, potentially leading to stale icache
data residing further on in a hugetlb page.

This patch addresses this issue by taking into consideration the
order of the page when flushing the dcache.
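
The change boils down to scaling the flush length by the compound page
order, roughly (a sketch, not the full diff):

  /* before: only the head page is cleaned */
  __flush_dcache_area(page_address(page), PAGE_SIZE);

  /* after: cover every subpage of a huge page */
  __flush_dcache_area(page_address(page),
                      PAGE_SIZE << compound_order(page));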

Reported-by: Mark Brown <broonie@linaro.org>
Tested-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # v3.11+
(cherry picked from commit 923b8f5044da753e4985ab15c1374ced2cdf616c)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years ago  arm64: Fix barriers used for page table modifications
Catalin Marinas [Mon, 9 Jun 2014 10:55:03 +0000 (11:55 +0100)]
arm64: Fix barriers used for page table modifications

The architecture specification states that both DSB and ISB are required
between page table modifications and subsequent memory accesses using the
corresponding virtual address. When TLB invalidation takes place, the
tlb_flush_* functions already have the necessary barriers. However, there are
other functions like create_mapping() for which this is not the case.

The patch adds the DSB+ISB instructions in the set_pte() function for
valid kernel mappings. The invalid pte case is handled by tlb_flush_*
and the user mappings in general have a corresponding update_mmu_cache()
call containing a DSB. Even when update_mmu_cache() isn't called, the
kernel can still cope with an unlikely spurious page fault by
re-executing the instruction.

In addition, the set_pmd, set_pud() functions gain an ISB for
architecture compliance when block mappings are created.
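
A sketch of the set_pte() part (the predicate name below is illustrative;
the point is where the barriers sit):

  static inline void set_pte(pte_t *ptep, pte_t pte)
  {
          *ptep = pte;

          /*
           * Valid kernel mappings only: invalid ptes are covered by
           * tlb_flush_*() and user mappings by update_mmu_cache().
           */
          if (pte_valid_not_user(pte)) {
                  dsb();
                  isb();
          }
  }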

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Steve Capper <steve.capper@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 54d6ba0ede61f12b2a03d74bdbf004719a9cfefc)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years ago  arm64: mm: Optimise tlb flush logic where we have >4K granule
Steve Capper [Fri, 2 May 2014 13:49:00 +0000 (14:49 +0100)]
arm64: mm: Optimise tlb flush logic where we have >4K granule

The tlb maintenance functions __cpu_flush_user_tlb_range and
__cpu_flush_kern_tlb_range do not take into consideration the page
granule when looping through the address range, and repeatedly flush
tlb entries for the same page when operating with 64K pages.

This patch re-works the logic such that we instead advance the loop by
1 << (PAGE_SHIFT - 12), so we avoid repeating ourselves.

Also the routines have been converted from assembler to static inline
functions to aid with legibility and potential compiler optimisations.

The isb() has been removed from flush_tlb_kernel_range(.) as it is
only needed when changing the execute permission of a mapping. If one
needs to set an area of the kernel as execute/non-execute an isb()
must be inserted after the call to flush_tlb_kernel_range.
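
The key part of the reworked loop (a sketch; the TLBI argument encodes
VA >> 12, so the stride below is exactly one page for any granule):

  unsigned long addr;

  for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
          asm("tlbi vae1is, %0" : : "r" (addr));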

Cc: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit fa48e6f780a681cdbc7820e33259edfe1a79b9e3)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years ago  arm64: use correct register width when retrieving ASID
Matthew Leach [Wed, 25 Sep 2013 15:33:13 +0000 (16:33 +0100)]
arm64: use correct register width when retrieving ASID

The ASID is represented as an unsigned int in mm_context_t and we
currently use the mmid assembler macro to access this element of the
struct. This should be accessed with a register of 32-bit width. If
the incorrect register width is used the ASID will be returned in
bits[32:63] of the register when running under big-endian.

Fix a use of the mmid macro in tlb.S to use a 32-bit access.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit fc18047c732f6becba92618a397555927687efd3)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years ago  lib: add fdt_empty_tree.c
Mark Salter [Tue, 4 Feb 2014 16:11:10 +0000 (11:11 -0500)]
lib: add fdt_empty_tree.c

CONFIG_LIBFDT support does not include fdt_empty_tree.c, which is
needed by the arm64 EFI stub. Add it to libfdt_files.

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
(cherry picked from commit adaf5687846c25613d58c0a2f5d9e024547cdbec)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years ago  of/fdt: Convert FDT functions to use libfdt
Rob Herring [Wed, 2 Apr 2014 20:10:14 +0000 (15:10 -0500)]
of/fdt: Convert FDT functions to use libfdt

The kernel FDT functions predate libfdt and are much more limited in
functionality. Also, the kernel functions and libfdt functions are
not compatible with each other because they have different definitions
of node offsets. To avoid this incompatibility and in preparation to
add more FDT parsing functions which will need libfdt, let's first
convert the existing code to use libfdt.

The FDT unflattening, top-level FDT scanning, and property retrieval
functions are converted to use libfdt. The scanning code should be
re-worked to be more efficient and understandable by using libfdt to
find nodes directly by path or compatible strings.

Signed-off-by: Rob Herring <robh@kernel.org>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Stephen Chivers <schivers@csc.com>
(cherry picked from commit e6a6928c3ea1d0195ed75a091e345696b916c09b)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
drivers/of/fdt.c

9 years ago  of/fdt: update of_get_flat_dt_prop in prep for libfdt
Mark Brown [Thu, 24 Jul 2014 20:06:21 +0000 (21:06 +0100)]
of/fdt: update of_get_flat_dt_prop in prep for libfdt

Make the of_get_flat_dt_prop arguments compatible with the libfdt
fdt_getprop call, in preparation for converting the FDT code to use
libfdt. Make the return value const and the property length pointer
type an int.
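
Roughly, the signature change looks like the following (declarations
only, shown to make the correspondence with fdt_getprop() visible; the
'before' form is reconstructed from the description above):

  /* before: non-const return, length reported through an unsigned long */
  void *of_get_flat_dt_prop(unsigned long node, const char *name,
                            unsigned long *size);

  /* after: const return and an int length pointer ... */
  const void *of_get_flat_dt_prop(unsigned long node, const char *name,
                                  int *size);

  /* ... matching the libfdt accessor it will sit on top of */
  const void *fdt_getprop(const void *fdt, int nodeoffset,
                          const char *name, int *lenp);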

Signed-off-by: Rob Herring <robh@kernel.org>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Stephen Chivers <schivers@csc.com>
(cherry picked from commit 9d0c4dfedd96ee54fc075b16d02f82499c8cc3a6)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arc/kernel/devtree.c
arch/arm/kernel/devtree.c
arch/arm/mach-exynos/exynos.c
arch/arm/plat-samsung/s5p-dev-mfc.c
arch/powerpc/kernel/epapr_paravirt.c
arch/powerpc/kernel/prom.c
arch/powerpc/mm/hash_utils_64.c
arch/powerpc/platforms/powernv/opal.c
arch/xtensa/kernel/setup.c
drivers/of/fdt.c

9 years agoof/fdt: remove unused of_scan_flat_dt_by_path
Rob Herring [Sat, 29 Mar 2014 19:14:17 +0000 (14:14 -0500)]
of/fdt: remove unused of_scan_flat_dt_by_path

of_scan_flat_dt_by_path is not used anywhere in the kernel, so remove it.

Signed-off-by: Rob Herring <robh@kernel.org>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Stephen Chivers <schivers@csc.com>
(cherry picked from commit bba04d965d06abbbe10afd3687742389107e198e)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
drivers/of/fdt.c

9 years agoof: Fix the section mismatch warnings.
Xiubo Li [Tue, 8 Apr 2014 05:48:07 +0000 (13:48 +0800)]
of: Fix the section mismatch warnings.

In tag next-20140407, building with CONFIG_DEBUG_SECTION_MISMATCH
enabled, the following warnings occur:

WARNING: drivers/built-in.o(.text.unlikely+0x2220): Section mismatch
in reference from the function __reserved_mem_check_root() to the
function .init.text:of_get_flat_dt_prop()
The function __reserved_mem_check_root() references
the function __init of_get_flat_dt_prop().
This is often because __reserved_mem_check_root lacks a __init
annotation or the annotation of of_get_flat_dt_prop is wrong.

WARNING: vmlinux.o(.text.unlikely+0xb9d0): Section mismatch in reference
from the function __reserved_mem_check_root() to the (unknown reference)
.init.data:(unknown)
The function __reserved_mem_check_root() references
the (unknown reference) __initdata (unknown).
This is often because __reserved_mem_check_root lacks a __initdata
annotation or the annotation of (unknown) is wrong.

This is caused by:
'drivers: of: add initialization code for dynamic reserved memory'.
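
A minimal sketch of the kind of fix such a warning calls for (function
names taken from the warning text; the body is illustrative): the
caller gets the same __init annotation as the init-only helper it
references, so both end up in .init.text and are discarded together
after boot.

  #include <linux/init.h>
  #include <linux/of_fdt.h>

  static int __init __reserved_mem_check_root(unsigned long node)
  {
          const void *prop = of_get_flat_dt_prop(node, "#size-cells", NULL);

          /* ... validate #size-cells / #address-cells against the root ... */
          return prop ? 0 : -1;
  }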

Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
Signed-off-by: Rob Herring <robh@kernel.org>
(cherry picked from commit 5b6241185e2cded07ca3f5f646b55c641928ba4e)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agoof: only scan for reserved mem when fdt present
Josh Cartwright [Thu, 13 Mar 2014 21:36:36 +0000 (16:36 -0500)]
of: only scan for reserved mem when fdt present

When the reserved memory patches hit -next, several legacy (non-DT) boot
failures were detected and bisected down to that commit. There needs to
be some sanity checking whether a DT is even present before parsing the
reserved ranges.
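
A minimal sketch of such a guard, assuming the usual
initial_boot_params pointer to the FDT blob (the actual patch may
differ in detail):

  void __init early_init_fdt_scan_reserved_mem(void)
  {
          if (!initial_boot_params)       /* legacy boot: no FDT to parse */
                  return;

          /* ... walk the memreserve map and /reserved-memory nodes ... */
  }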

Reported-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Josh Cartwright <joshc@codeaurora.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit 2040b52768ebab6e7bd73af0dc63703269c62f17)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agodrivers: of: add support for custom reserved memory drivers
Marek Szyprowski [Fri, 28 Feb 2014 13:42:49 +0000 (14:42 +0100)]
drivers: of: add support for custom reserved memory drivers

Add support for custom reserved memory drivers. Call their init() function
for each reserved region and prepare to use the operations they provide
via the reserved_mem->ops array.
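
A hedged sketch of how such a driver might hook in, written against the
present-day of_reserved_mem API (the callback signature and the
compatible string here are illustrative and may differ from the version
this patch introduced):

  #include <linux/kernel.h>
  #include <linux/of_reserved_mem.h>

  static int my_pool_init(struct reserved_mem *rmem)
  {
          pr_info("reserved region: base %pa, size %pa\n",
                  &rmem->base, &rmem->size);
          /* rmem->ops would be pointed at device_init/device_release hooks */
          return 0;
  }
  RESERVEDMEM_OF_DECLARE(my_pool, "vendor,my-reserved-pool", my_pool_init);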

Based on previous code provided by Josh Cartwright <joshc@codeaurora.org>

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit f618c4703a14672d27bc2ca5d132a844363d6f5f)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agodrivers: of: add initialization code for dynamic reserved memory
Marek Szyprowski [Fri, 28 Feb 2014 13:42:48 +0000 (14:42 +0100)]
drivers: of: add initialization code for dynamic reserved memory

This patch adds support for dynamically allocated reserved memory regions
declared in device tree. Such regions are defined by 'size', 'alignment'
and 'alloc-ranges' properties.

Based on previous code provided by Josh Cartwright <joshc@codeaurora.org>

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit 3f0c8206644836e4f10a6b9fc47cda6a9a372f9b)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agodrivers: of: add initialization code for static reserved memory
Marek Szyprowski [Fri, 28 Feb 2014 13:42:47 +0000 (14:42 +0100)]
drivers: of: add initialization code for static reserved memory

This patch adds support for static (defined by 'reg' property) reserved
memory regions declared in device tree.

Memory blocks can be reliably reserved only during early boot. This must
happen before the whole memory management subsystem is initialized,
because we need to ensure that the given contiguous blocks are not yet
allocated by the kernel. It must also happen before kernel mappings for
the whole of low memory are created, to ensure that no mappings are
created for the reserved blocks. Typically, all this happens before the
device tree structures are unflattened, so we need to get the reserved
memory layout directly from the fdt.
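
A hedged sketch of the flow described, using the generic of_fdt and
memblock helpers (cell handling simplified, error paths trimmed):

  #include <linux/errno.h>
  #include <linux/memblock.h>
  #include <linux/of_fdt.h>

  static int __init reserve_static_region(unsigned long node)
  {
          int len;
          const __be32 *reg = of_get_flat_dt_prop(node, "reg", &len);
          phys_addr_t base, size;

          if (!reg || len < (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32))
                  return -EINVAL;

          /* 'reg' is a <base size> pair in root #address/#size cells */
          base = dt_mem_next_cell(dt_root_addr_cells, &reg);
          size = dt_mem_next_cell(dt_root_size_cells, &reg);

          return memblock_reserve(base, size);
  }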

Based on previous code provided by Josh Cartwright <joshc@codeaurora.org>

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit e8d9d1f5485b52ec3c4d7af839e6914438f6c285)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
drivers/of/fdt.c
include/linux/of_fdt.h

9 years agodrivers: of: add function to scan fdt nodes given by path
Marek Szyprowski [Mon, 26 Aug 2013 12:41:56 +0000 (14:41 +0200)]
drivers: of: add function to scan fdt nodes given by path

Add a function to scan the flattened device-tree starting from the
node given by the path. It is used to extract information (like reserved
memory) that is required early in boot, before the tree can be unflattened.
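
A hedged usage sketch (the path and callback are illustrative; the
iterator signature is assumed to mirror of_scan_flat_dt()):

  #include <linux/kernel.h>
  #include <linux/of_fdt.h>

  static int __init count_node(unsigned long node, const char *uname,
                               int depth, void *data)
  {
          int *count = data;

          (*count)++;
          return 0;       /* 0 = keep scanning */
  }

  static void __init count_reserved_nodes(void)
  {
          int count = 0;

          of_scan_flat_dt_by_path("/reserved-memory", count_node, &count);
          pr_info("%d node(s) under /reserved-memory\n", count);
  }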

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Tomasz Figa <t.figa@samsung.com>
Reviewed-by: Rob Herring <rob.herring@calxeda.com>
(cherry picked from commit 57d74bcf3072b65bde5aa540cedc976a75c48e5c)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agoOF: Add helper for matching against linux,stdout-path
Sascha Hauer [Mon, 5 Aug 2013 12:40:44 +0000 (14:40 +0200)]
OF: Add helper for matching against linux,stdout-path

devicetrees may have a linux,stdout-path property in the chosen
node describing the console device. This adds a helper function
to match a device against this property so a driver can call
add_preferred_console for a matching device.
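
A hedged sketch of the intended call pattern from a serial/console
driver (the console name and index below are made up for illustration):

  #include <linux/console.h>
  #include <linux/of.h>

  static void maybe_prefer_console(struct device_node *np)
  {
          /* If this device is the one named by /chosen/linux,stdout-path,
           * register it as the preferred console. */
          if (of_device_is_stdout_path(np))
                  add_preferred_console("ttyHYPO0", 0, NULL);
  }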

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 5c19e95216b93b0d29c6a4887e69a980edc6fc81)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agoof: Specify initrd location using 64-bit
Santosh Shilimkar [Mon, 1 Jul 2013 18:20:35 +0000 (14:20 -0400)]
of: Specify initrd location using 64-bit

On some PAE architectures, the entire range of physical memory could reside
outside the 32-bit limit.  These systems need the ability to specify the
initrd location using 64-bit numbers.

This patch globally modifies the early_init_dt_setup_initrd_arch() function to
use 64-bit numbers instead of the current unsigned long.

There has been quite a bit of debate about whether to use u64 or phys_addr_t.
It was concluded to stick to u64 to be consistent with rest of the device
tree code. As summarized by Geert, "The address to load the initrd is decided
by the bootloader/user and set at that point later in time. The dtb should not
be tied to the kernel you are booting."
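
In prototype terms the change is roughly the following (the 'before'
form is reconstructed from the description above):

  /* before: limited to the 32-bit physical range on PAE systems */
  void early_init_dt_setup_initrd_arch(unsigned long start, unsigned long end);

  /* after: the initrd may be placed anywhere in a 64-bit physical space */
  void early_init_dt_setup_initrd_arch(u64 start, u64 end);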

More details on the discussion can be found here:
https://lkml.org/lkml/2013/6/20/690
https://lkml.org/lkml/2012/9/13/544

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit 374d5c9964c10373ba39bbe934f4262eb87d7114)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agoarm64: KVM: define HYP and Stage-2 translation page flags
Marc Zyngier [Fri, 7 Dec 2012 18:35:41 +0000 (18:35 +0000)]
arm64: KVM: define HYP and Stage-2 translation page flags

Add HYP and S2 page flags, for both normal and device memory.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 363116073a26dbc2903d8417047597eebcc05273)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/include/asm/pgtable-hwdef.h
arch/arm64/include/asm/pgtable.h

9 years agoarm64: Fix for the arm64 kern_addr_valid() function
Dave Anderson [Tue, 15 Apr 2014 17:53:24 +0000 (18:53 +0100)]
arm64: Fix for the arm64 kern_addr_valid() function

Fix for the arm64 kern_addr_valid() function to recognize
virtual addresses in the kernel logical memory map.  The
function fails as written because it does not check whether
the addresses in that region are mapped at the pmd level to
2MB or 512MB pages, continues the page table walk to the
pte level, and ends up passing a garbage value to pfn_valid().
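
A hedged fragment of the check the fix adds inside the walk
(surrounding code omitted): a pmd that is a section mapping has no pte
level underneath it, so the pfn comes from the pmd entry itself.

          pmd = pmd_offset(pud, addr);
          if (pmd_none(*pmd))
                  return 0;

          if (pmd_sect(*pmd))
                  return pfn_valid(pmd_pfn(*pmd));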

Tested on 4K-page and 64K-page kernels.

Signed-off-by: Dave Anderson <anderson@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit da6e4cb67c6dd1f72257c0a4a97c26dc4e80d3a7)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agoarm64: Clean up the default pgprot setting
Catalin Marinas [Thu, 3 Apr 2014 14:57:15 +0000 (15:57 +0100)]
arm64: Clean up the default pgprot setting

The primary aim of this patchset is to remove the pgprot_default and
prot_sect_default global variables and rely strictly on predefined
values. The original goal was to be able to run SMP kernels on UP
hardware by not setting the Shareability bit. However, we are unlikely to
see UP ARMv8 hardware and, even if we do, the Shareability bit is no
longer assumed to disable cacheable accesses.

A side effect is that the device mappings now have the Shareability
attribute set. The hardware, however, should ignore it since Device
accesses are always Outer Shareable.

Following the removal of the two global variables, there is some PROT_*
macro reshuffling and cleanup, including the __PAGE_* macros (replaced
by PAGE_*).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit a501e32430d4232012ab708b8f0ce841f29e0f02)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/include/asm/io.h
arch/arm64/include/asm/pgtable.h
arch/arm64/mm/mmu.c

9 years agoarm64: Add function to create identity mappings
Mark Salter [Wed, 12 Mar 2014 16:28:06 +0000 (12:28 -0400)]
arm64: Add function to create identity mappings

At boot time, before switching to a virtual UEFI memory map, firmware
expects UEFI memory and IO regions to be identity mapped whenever
the kernel makes runtime services calls. The existing early boot code
creates an identity map of kernel text/data but this is not sufficient
for UEFI. This patch adds a create_id_mapping() function which reuses
the core code of the existing create_mapping().
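
In interface terms this is approximately the following (the region
arguments in the usage line are illustrative placeholders):

  void __init create_id_mapping(phys_addr_t addr, phys_addr_t size, int map_io);

  /* e.g. while walking the UEFI memory map: */
  create_id_mapping(md_phys_addr, md_num_pages << EFI_PAGE_SHIFT, region_is_mmio);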

Signed-off-by: Mark Salter <msalter@redhat.com>
[ Fixed error message formatting (%pa). ]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
(cherry picked from commit d7ecbddf4caefbac1b99478dd2b679f83dfc2545)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/mm/mmu.c

9 years agoarm64: place initial page tables above the kernel
Mark Rutland [Tue, 24 Jun 2014 15:51:35 +0000 (16:51 +0100)]
arm64: place initial page tables above the kernel

Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
bootloaders may use portions of this memory below the kernel and we do
not parse the memory reservation list until after the MMU has been
enabled. As such we may clobber some memory a bootloader wishes to have
preserved.

To enable the use of all of this memory by bootloaders (when the
required memory reservations are communicated to the kernel) it is
necessary to move our initial page tables elsewhere. As we currently
have an effectively unbound requirement for memory at the end of the
kernel image for .bss, we can place the page tables here.

This patch moves the initial page tables to the end of the kernel image,
after the BSS. As they do not consist of any initialised data they will
be stripped from the kernel Image as with the BSS. The BSS clearing
routine is updated to stop at __bss_stop rather than _end so as not to
clobber the page tables, and memory reservations made redundant by the
new organisation are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit bd00cd5f8c8c3c282bb1e1eac6a6679a4f808091)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/mm/init.c

9 years agoarm64: Relax the kernel cache requirements for boot
Catalin Marinas [Wed, 26 Mar 2014 18:25:55 +0000 (18:25 +0000)]
arm64: Relax the kernel cache requirements for boot

With system caches for the host OS or architected caches for a guest OS,
we cannot easily guarantee that there are no dirty or stale cache lines
for the areas of memory written by the kernel during boot with the MMU
off (and therefore with non-cacheable accesses).

This patch adds the necessary cache maintenance during boot and relaxes
the booting requirements.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit c218bca74eeafa2f8528b6bbb34d112075fcf40a)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/kernel/head.S

9 years agoarm64: head: create a new function for setting the boot_cpu_mode flag
Matthew Leach [Fri, 11 Oct 2013 13:52:16 +0000 (14:52 +0100)]
arm64: head: create a new function for setting the boot_cpu_mode flag

Currently, the code for setting the __cpu_boot_mode flag is munged in
with el2_setup. This makes things difficult on a BE bringup as a
memory access has to have occurred before el2_setup, which is the place
where we'd like to set the endianness of the current EL.

Create a new function for setting __cpu_boot_mode and have el2_setup
return the mode the CPU booted in. Also define a new constant in virt.h,
BOOT_CPU_MODE_EL1, for readability.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 828e9834e9a5b7e61046aa3c5f603a4fecba2fb4)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agoarm64: Align the kbuild output for VDSOL and VDSOA
Ian Campbell [Tue, 15 Jul 2014 07:38:08 +0000 (08:38 +0100)]
arm64: Align the kbuild output for VDSOL and VDSOA

Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kbuild@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit ad789ba5f7086138461420d2156478d33fb61077)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agoarm64: vdso: put vdso datapage in a separate vma
Will Deacon [Wed, 9 Jul 2014 18:22:11 +0000 (19:22 +0100)]
arm64: vdso: put vdso datapage in a separate vma

The VDSO datapage doesn't need to be executable (no code there) or
CoW-able (the kernel writes the page, so a private copy is totally
useless).

This patch moves the datapage into its own VMA, identified as "[vvar]"
in /proc/<pid>/maps.
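
A quick user-space way to observe the effect (not kernel code):

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          char line[512];
          FILE *f = fopen("/proc/self/maps", "r");

          if (!f)
                  return 1;
          while (fgets(line, sizeof(line), f))
                  if (strstr(line, "[vvar]") || strstr(line, "[vdso]"))
                          fputs(line, stdout);  /* datapage now shows up as [vvar] */
          fclose(f);
          return 0;
  }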

Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 8715493852783358ef8656a0054a14bf822509cf)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agoarm64: Remove duplicate (SWAPPER|IDMAP)_DIR_SIZE definitions
Catalin Marinas [Tue, 15 Jul 2014 14:46:02 +0000 (15:46 +0100)]
arm64: Remove duplicate (SWAPPER|IDMAP)_DIR_SIZE definitions

Just keep the asm/page.h definition as this is included in vmlinux.lds.S
as well.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
(cherry picked from commit b2f8c07bcb7d1a3575f41444d2d8048d0c922762)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agoarm64: head.S: remove unnecessary function alignment
Mark Rutland [Tue, 24 Jun 2014 15:51:34 +0000 (16:51 +0100)]
arm64: head.S: remove unnecessary function alignment

Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't
span any page boundary, which simplifies the idmap and spares us from
requiring an additional page table to map half of the function. In
keeping with other important requirements in architecture code, this
fact is undocumented.

Additionally, as the function consists of three instructions totalling
12 bytes with no literal pool data, a smaller alignment of 16 bytes
would be sufficient.

This patch reduces the alignment to 16 bytes and documents the
underlying reason for the alignment. This reduces the required alignment
of the entire .head.text section from 64 bytes to 16 bytes, though it
may still be aligned to a larger value depending on TEXT_OFFSET.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 909a4069da65a5cfca8c968edf9f0d99f694d2f3)
Signed-off-by: Mark Brown <broonie@linaro.org>
9 years agoarm64: export __cpu_{clear,copy}_user_page functions
Mark Salter [Tue, 17 Jun 2014 17:14:26 +0000 (18:14 +0100)]
arm64: export __cpu_{clear,copy}_user_page functions

The __cpu_clear_user_page() and __cpu_copy_user_page() functions
are not currently exported. This prevents modules from using
clear_user_page() and copy_user_page().
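
A hedged sketch of what the export amounts to (declarations shown only
for context; whether the plain or _GPL export macro is used is a detail
of the actual patch):

  #include <linux/export.h>

  extern void __cpu_clear_user_page(void *kaddr, unsigned long vaddr);
  extern void __cpu_copy_user_page(void *kto, const void *kfrom,
                                   unsigned long vaddr);

  EXPORT_SYMBOL_GPL(__cpu_clear_user_page);
  EXPORT_SYMBOL_GPL(__cpu_copy_user_page);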

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit bec7cedc8a92bfe96d32febe72634b30c63896bd)
Signed-off-by: Mark Brown <broonie@linaro.org>