Amit Pundir [Tue, 17 May 2016 10:36:17 +0000 (16:06 +0530)]
Revert "drivers: power: use 'current' instead of 'get_current()'"
This reverts commit e1b5d103894d097fb630aebc3c1fdaf257f7c9bb.
This patch fixed the AOSP commit ad86cc8ad632 (drivers: power: Add watchdog
timer to catch drivers which lockup during suspend.), which we dropped in
Change-Id Ic72a87432e27844155467817600adc6cf0c2209c, so we no longer need
this fix. Part of this patch was already reverted in the above-mentioned
Change-Id.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Mon, 16 May 2016 07:55:35 +0000 (13:25 +0530)]
cpufreq: interactive: drop cpufreq_{get,put}_global_kobject func calls
Upstream commit 8eec1020f0c0 (cpufreq: create cpu/cpufreq at boot time)
makes sure that the cpufreq sysfs entry gets created at boot time, so there
is no need to create/destroy it on demand anymore.
Drop the deprecated cpufreq_{get,put}_global_kobject function calls, which
otherwise result in the following compilation errors:
drivers/cpufreq/cpufreq_interactive.c: In function 'cpufreq_governor_interactive':
drivers/cpufreq/cpufreq_interactive.c:1187:4: error: implicit declaration of function 'cpufreq_get_global_kobject' [-Werror=implicit-function-declaration]
WARN_ON(cpufreq_get_global_kobject());
^
drivers/cpufreq/cpufreq_interactive.c:1197:5: error: implicit declaration of function 'cpufreq_put_global_kobject'[-Werror=implicit-function-declaration]
cpufreq_put_global_kobject();
^
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Mon, 16 May 2016 07:47:28 +0000 (13:17 +0530)]
Revert "cpufreq: interactive: build fixes for 4.4"
This reverts commit bc68f6c4efbd4ddbb15817203f18b7941d9ffd52.
This build fix broke the interactive governor at runtime with duplicate
sysfs entry warnings at boot time. We no longer need to create/destroy the
cpufreq sysfs entry at run time on demand, thanks to upstream commit
8eec1020f0c0 (cpufreq: create cpu/cpufreq at boot time) which creates it
at boot time. Hence drop this build fix.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
John Stultz [Thu, 12 May 2016 18:17:52 +0000 (11:17 -0700)]
xt_qtaguid: Fix panic caused by processing non-full socket.
In an issue very similar to 4e461c777e3 (xt_qtaguid: Fix panic caused by
synack processing), we were seeing occasional panics in testing.
In this case it was the same issue, but caused by a different call path:
the sk being returned from qtaguid_find_sk() was not a full socket,
causing the sk->sk_socket dereference to fail.
This patch adds an extra check to ensure the sk being returned is a full
socket, and returns NULL if it is not.
Reported-by: Milosz Wasilewski <milosz.wasilewski@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
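To illustrate the shape of that check, a minimal sketch (not the actual
xt_qtaguid hunk; qtaguid_find_sk() is the module's existing lookup and the
wrapper name below is an assumption):

#include <net/sock.h>
#include <linux/netfilter/x_tables.h>

/* existing lookup inside xt_qtaguid, declared here only for the sketch */
struct sock *qtaguid_find_sk(const struct sk_buff *skb,
			     struct xt_action_param *par);

/* Return the socket only if it is a full socket, so callers may safely
 * dereference sk->sk_socket; otherwise return NULL. */
static struct sock *qtaguid_lookup_full_sk(const struct sk_buff *skb,
					   struct xt_action_param *par)
{
	struct sock *sk = qtaguid_find_sk(skb, par);

	if (sk && !sk_fullsock(sk))
		return NULL;	/* request/timewait sock: no sk_socket */
	return sk;
}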
Dmitry Shmidt [Wed, 11 May 2016 18:01:02 +0000 (11:01 -0700)]
fiq_debugger: Add fiq_debugger.disable option
This change allows the same kernel image to be used with different console
options for uart and fiq_debugger. If fiq_debugger.disable is set to
1/y/Y, fiq_debugger will not be initialized.
Change-Id: I71fda54f5f863d13b1437b1f909e52dd375d002d
Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
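A minimal sketch of how such a switch is typically wired up (illustrative
only; the variable and init-function names below are assumptions, not the
driver's actual code):

#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/printk.h>

static bool fiq_debugger_disable;
/* consumed from the kernel command line as fiq_debugger.disable=1/y/Y */
module_param_named(disable, fiq_debugger_disable, bool, 0444);

static int __init fiq_debugger_init(void)
{
	if (fiq_debugger_disable) {
		pr_info("fiq_debugger: disabled on command line\n");
		return 0;	/* skip all initialization */
	}
	/* ... normal uart takeover and console registration continue here ... */
	return 0;
}
module_init(fiq_debugger_init);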
Janis Danisevskis [Thu, 14 Apr 2016 12:57:03 +0000 (13:57 +0100)]
UPSTREAM: procfs: fixes pthread cross-thread naming if !PR_DUMPABLE
The PR_DUMPABLE flag causes the pid related paths of the
proc file system to be owned by ROOT. The implementation
of pthread_set/getname_np however needs access to
/proc/<pid>/task/<tid>/comm.
If PR_DUMPABLE is false this implementation is locked out.
This patch installs a special permission function for
the file "comm" that grants read and write access to
all threads of the same group regardless of the ownership
of the inode. For all other threads the function falls back
to the generic inode permission check.
Signed-off-by: Janis Danisevskis <jdanis@google.com>
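A sketch of the permission override described above (close in spirit to
the patch, though the helper wiring here is a simplified assumption):
threads of the same group get read/write access to .../comm, everyone
else falls back to the generic inode permission check.

#include <linux/fs.h>
#include <linux/sched.h>

static int proc_tid_comm_permission(struct inode *inode, int mask)
{
	bool is_same_tgroup;
	struct task_struct *task;

	/* get_proc_task() is the procfs-internal pid-to-task lookup */
	task = get_proc_task(inode);
	if (!task)
		return -ESRCH;
	is_same_tgroup = same_thread_group(current, task);
	put_task_struct(task);

	if (likely(is_same_tgroup && !(mask & MAY_EXEC)))
		/* read/write from a sibling thread is always allowed */
		return 0;

	return generic_permission(inode, mask);
}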
Jimmy Perchet [Mon, 9 May 2016 17:32:04 +0000 (10:32 -0700)]
FROMLIST: wlcore: Disable filtering in AP role
When a STA interface is configured (set up), the driver installs a
multicast filter. This is normal behavior: when an application subscribes
to a multicast address, the filter is updated. When an Access Point
interface is configured, no filter is installed and the "filter update"
path is disabled in the driver.
The problem happens when you switch an interface from STA type to AP
type: the filter is installed but there is no means to update it.
Change-Id: Ied22323af831575303abd548574918baa9852dd0
Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
Lianwei Wang [Fri, 6 May 2016 07:17:57 +0000 (00:17 -0700)]
Revert "drivers: power: Add watchdog timer to catch drivers which lockup during suspend."
This reverts commit ad86cc8ad63229eeeba0628e99f2f59df55a25fd.
Commit 70fea60d888d ("PM / Sleep: Detect device suspend/resume lockup...")
added a suspend/resume watchdog timer to catch such lockups. Let's revert
the duplicate one.
Change-Id: Ic72a87432e27844155467817600adc6cf0c2209c
Signed-off-by: Lianwei Wang <lianwei.wang@gmail.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Dmitry Shmidt [Wed, 4 May 2016 20:51:38 +0000 (13:51 -0700)]
fiq_debugger: Add option to apply uart overlay by FIQ_DEBUGGER_UART_OVERLAY
fiq_debugger takes over the uart, so it is necessary to disable the
original uart in the DT file. This can be done manually or via an overlay.
Change-Id: I9f50ec15b0e22e602d73b9f745fc8666f8925d09
Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
Amit Pundir [Wed, 4 May 2016 05:44:16 +0000 (11:14 +0530)]
Revert "Recreate asm/mach/mmc.h include file"
This reverts commit 5b42ae3edab6c39c1337d36881d29350bb36dcff.
This recreated arch/arm/include/asm/mach/mmc.h include file has no active
users in android-4.x kernels. Also, all the necessary bits have already
been moved to include/linux/amba/mmci.h; see
6ef297f86b62 (ARM: 5720/1: Move MMCI header to amba include dir)
Change-Id: Ibf258b355d17f54f49b777a8f6e0089e9b59a3a5
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Wed, 4 May 2016 05:35:15 +0000 (11:05 +0530)]
Revert "ARM: Add 'card_present' state to mmc_platfrom_data"
This reverts commit 541632275e983573b8250fcd4402f772d7bd1e6f.
mmc_platform_data (or arch/arm/include/asm/mach/mmc.h in general) has no
active users in android-4.x kernels. Also, all the necessary bits have
already been moved to include/linux/amba/mmci.h; see
6ef297f86b62 (ARM: 5720/1: Move MMCI header to amba include dir)
Change-Id: Iff384eb527327bf88543408e0257241c1fd99a43
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Jack Pham [Wed, 23 Mar 2016 20:18:03 +0000 (13:18 -0700)]
usb: dual-role: make stub functions inline
If CONFIG_DUAL_ROLE_USB_INTF is disabled but the exported functions
are referenced, the build will result in warnings such as:
In file included from include/linux/usb/class-dual-role.h:112:13:
warning: ‘dual_role_instance_changed’ defined but not used
[-Wunused-function]
These stub functions should be static inline.
Change-Id: I5a9ef58dca32306fac5a4c7f28cdaa36fa8ae078
Signed-off-by: Jack Pham <jackp@codeaurora.org>
(cherry picked from commit 2d152dbb0743526b21d6bbefe097f874c027f860)
(cherry picked from commit 8ad66cafaa10e6ba94ff79a8dbc2cc437c6bfe93)
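The pattern at issue, sketched schematically rather than copied from
class-dual-role.h: with the config option off, the stubs must be static
inline so that includers which never call them do not warn.

struct dual_role_phy_instance;

#ifdef CONFIG_DUAL_ROLE_USB_INTF
void dual_role_instance_changed(struct dual_role_phy_instance *inst);
#else
static inline void
dual_role_instance_changed(struct dual_role_phy_instance *inst)
{
	/* no-op when the dual-role class is not built */
}
#endif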
Amit Pundir [Mon, 2 May 2016 10:02:15 +0000 (15:32 +0530)]
Revert "mmc: Add status IRQ and status callback function to mmc platform data"
This reverts commit 91fa97e1e5c001d52f6c993d37be08d1e84f47b7.
This patch is no longer valid. There are no users for this status IRQ and
callback in android-4.x. The Qcom platform (mach-msm/qsd8x50, HTC Dream..)
and SDCC controller (msm_sdcc) using this status IRQ and callback were
dropped from mainline some time back; see
27842bb18b00 (mmc: Remove msm_sdcc driver)
c0c89fafa289 (ARM: Remove mach-msm and associated ARM architecture code)
Change-Id: Ia38e42a06dc184395f79c1ec1d306bf9775704d5
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Yongqin Liu [Thu, 28 Apr 2016 05:53:36 +0000 (13:53 +0800)]
quick selinux support for tracefs
This is just a quick fix for tracefs with SELinux: add tracefs to the list
of whitelisted filesystem types in selinux_is_sblabel_mnt(). The right fix
would be to generalize this logic, as described in the last item on the
todo list:
https://bitbucket.org/seandroid/wiki/wiki/ToDo
Change-Id: I2aa803ccffbcd2802a7287514da7648e6b364157
Signed-off-by: Yongqin Liu <yongqin.liu@linaro.org>
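Schematically, the change amounts to extending a filesystem-type
whitelist, roughly as below (a simplified sketch; the real
selinux_is_sblabel_mnt() in security/selinux/hooks.c also checks the
superblock's labeling behavior, and the helper name here is made up):

#include <string.h>

static int fstype_is_sblabel_mnt(const char *fstype)
{
	return !strcmp(fstype, "sysfs")   ||
	       !strcmp(fstype, "debugfs") ||
	       !strcmp(fstype, "tracefs") ||	/* newly whitelisted */
	       !strcmp(fstype, "rootfs");
}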
Amit Pundir [Tue, 26 Apr 2016 09:17:53 +0000 (14:47 +0530)]
Revert "hid-multitouch: Filter collections by application usage."
This reverts commit 0840b80cb9626906b57df54e7229db60f9aea4f2.
This patch is already upstreamed in v4.4, commit
658d4aed59b3 (HID: hid-multitouch: Filter collections by application usage.),
and further fixed/cleaned up afterwards in commits
c2ef8f21ea8f (HID: multitouch: add support for trackpads),
76f5902aebda (HID: hid-multitouch: Simplify setup and frame synchronization) et al.
By having this duplicate patch in AOSP we are doing redundant
checks for Touchscreen and Touchpad devices.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Tue, 26 Apr 2016 09:44:35 +0000 (15:14 +0530)]
Revert "HID: steelseries: validate output report details"
This reverts commit 90037b2720acffa6da2269a10ecf24ec2dace89b.
Remove duplicate code. This patch is already upstreamed in v4.4, commit
41df7f6d4372 (HID: steelseries: validate output report details).
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
John Stultz [Sat, 23 Apr 2016 00:12:57 +0000 (17:12 -0700)]
xt_qtaguid: Fix panic caused by synack processing
Upstream commit ca6fb06518836ef9b65dc0aac02ff97704d52a05 (tcp: attach
SYNACK messages to request sockets instead of listener)
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=ca6fb0651883
changed the building of synack messages, so that skb->sk points to a
casted request_sock. This is problematic, as there is no sk_socket in a
request_sock. So when the qtaguid_mt function tries to access
sk->sk_socket, it accesses uninitialized memory.
After looking at how other netfilter implementations handle this, I
realized a skb_to_full_sk() helper had been added, which the xt_qtaguid
code isn't yet using.
This patch adds its use, and resolves panics seen when accessing
uninitialized memory while processing synack packets.
Reported-by: YongQin Liu <yongquin.liu@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
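Roughly, the helper is used as sketched below (an illustration, not the
actual hunk): skb_to_full_sk() maps the request_sock attached to a SYNACK
back to its listener, and only a full socket is passed on to code that
dereferences sk->sk_socket.

#include <net/sock.h>
#include <net/inet_sock.h>

static struct sock *qtaguid_skb_sk(const struct sk_buff *skb)
{
	struct sock *sk = skb_to_full_sk(skb);	/* request_sock -> listener */

	if (sk && !sk_fullsock(sk))
		return NULL;	/* e.g. timewait: still no sk_socket */
	return sk;
}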
Dmitry Shmidt [Mon, 25 Apr 2016 21:28:30 +0000 (14:28 -0700)]
Revert "mm: vmscan: Add a debug file for shrinkers"
Kernel panic when typing "cat /sys/kernel/debug/shrinker":
Unable to handle kernel paging request at virtual address 0af37d40
pgd = d4dec000
[0af37d40] *pgd=00000000
Internal error: Oops: 5 [#1] PREEMPT SMP ARM
[<c0bb8f24>] (_raw_spin_lock) from [<c020aa08>] (list_lru_count_one+0x14/0x28)
[<c020aa08>] (list_lru_count_one) from [<c02309a8>] (super_cache_count+0x40/0xa0)
[<c02309a8>] (super_cache_count) from [<c01f6ab0>] (debug_shrinker_show+0x50/0x90)
[<c01f6ab0>] (debug_shrinker_show) from [<c024fa5c>] (seq_read+0x1ec/0x48c)
[<c024fa5c>] (seq_read) from [<c022e8f8>] (__vfs_read+0x20/0xd0)
[<c022e8f8>] (__vfs_read) from [<c022f0d0>] (vfs_read+0x7c/0x104)
[<c022f0d0>] (vfs_read) from [<c022f974>] (SyS_read+0x44/0x9c)
[<c022f974>] (SyS_read) from [<c0107580>] (ret_fast_syscall+0x0/0x3c)
Code: e1a04000 e3a00001 ebd66b39 f594f000 (e1943f9f)
---[ end trace 60c74014a63a9688 ]---
Kernel panic - not syncing: Fatal exception
shrink_control.nid is used but not initialized; the same goes for
shrink_control.memcg.
This reverts commit b0e7a582b2264cdf75874dcd8df915b6b4427755.
Change-Id: I108de88fa4baaef99a53c4e4c6a1d8c4b4804157
Reported-by: Xiaowen Liu <xiaowen.liu@freescale.com>
Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
Amit Pundir [Tue, 26 Apr 2016 10:21:20 +0000 (15:51 +0530)]
Revert "SELinux: Enable setting security contexts on rootfs inodes."
This reverts commit 78d36d2111cd4ca722a602846f7db8f54a0b074c.
Drop this duplicate patch. This patch is already upstreamed in v4.4. Commits
5c73fceb8c70 (SELinux: Enable setting security contexts on rootfs inodes.),
12f348b9dcf6 (SELinux: rename SE_SBLABELSUPP to SBLABEL_MNT), and
b43e725d8d38 (SELinux: use a helper function to determine seclabel),
for reference.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Tue, 26 Apr 2016 10:21:06 +0000 (15:51 +0530)]
Revert "SELinux: build fix for 4.1"
This reverts commit 43e1b4f528e1654fadd1097f7cc5c50be6e45b77.
This patch is part of code which is already upstreamed in v4.4. Commits
5c73fceb8c70 (SELinux: Enable setting security contexts on rootfs inodes.),
12f348b9dcf6 (SELinux: rename SE_SBLABELSUPP to SBLABEL_MNT), and
b43e725d8d38 (SELinux: use a helper function to determine seclabel),
for reference.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Daniel Rosenberg [Fri, 22 Apr 2016 07:00:48 +0000 (00:00 -0700)]
fuse: Add support for d_canonical_path
Allows FUSE to report to inotify that it is acting
as a layered filesystem. The userspace component
returns a string representing the location of the
underlying file. If the string cannot be resolved
into a path, the top level path is returned instead.
bug: 23904372
Change-Id: Iabdca0bbedfbff59e9c820c58636a68ef9683d9f
Signed-off-by: Daniel Rosenberg <drosen@google.com>
Daniel Rosenberg [Fri, 22 Apr 2016 07:00:14 +0000 (00:00 -0700)]
vfs: change d_canonical_path to take two paths
bug: 23904372
Change-Id: I4a686d64b6de37decf60019be1718e1d820193e6
Signed-off-by: Daniel Rosenberg <drosen@google.com>
Amit Pundir [Mon, 25 Apr 2016 18:25:44 +0000 (23:55 +0530)]
android: recommended.cfg: remove CONFIG_UID_STAT
Remove UID Stat driver.
Change-Id: Ifc9d2c6fe27900f30e6407398f5b24222518bffc
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Thu, 1 Oct 2015 05:14:36 +0000 (10:44 +0530)]
netfilter: xt_qtaguid: seq_printf fixes
Update seq_printf() usage in xt_qtaguid to align
with changes from mainline commit 6798a8caaf64 ("fs/seq_file: convert int
seq_vprint/seq_printf/etc... returns to void").
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Mon, 25 Apr 2016 11:38:15 +0000 (17:08 +0530)]
Revert "misc: uidstat: Adding uid stat driver to collect network statistics."
This reverts commit 6b6d5fbf9ae567aefb58099a30bbb6d25fa8925b.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Mon, 25 Apr 2016 11:31:08 +0000 (17:01 +0530)]
Revert "net: activity_stats: Add statistics for network transmission activity"
This reverts commit afedd7beba14385fd797166751fde39e0f52cf72.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Mon, 25 Apr 2016 11:30:57 +0000 (17:00 +0530)]
Revert "net: activity_stats: Stop using obsolete create_proc_read_entry api"
This reverts commit 7c121720fa14889d59e933aad0a8b9ce948a39ae.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Mon, 25 Apr 2016 11:30:43 +0000 (17:00 +0530)]
Revert "misc: uidstat: avoid create_stat() race and blockage."
This reverts commit f7a812174033fe620509e6e8ca7022abd924b1c4.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Mon, 25 Apr 2016 11:30:31 +0000 (17:00 +0530)]
Revert "misc: uidstat: Remove use of obsolete create_proc_read_entry api"
This reverts commit fccab646d33381af63e4f4c0d4f309a1d2b4b0c3.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Mon, 25 Apr 2016 11:30:08 +0000 (17:00 +0530)]
Revert "misc seq_printf fixes for 4.4"
This reverts commit 5c7566a29bff14166d952f2ea525d5231546f821.
This revert undoes some changes in net/netfilter/xt_qtaguid.c as well.
I'll submit another patch to restore those changes.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Amit Pundir [Mon, 25 Apr 2016 11:28:20 +0000 (16:58 +0530)]
Revert "misc: uid_stat: Include linux/atomic.h instead of asm/atomic.h"
This reverts commit 8d3a6c1538fb021448c4f6381f6191061f947ba1.
This series of patches reverts the AOSP UID_STAT and NET_ACTIVITY_STATS
drivers. I could not find any meaningful usage of these interfaces in
AOSP master.
The UID_STAT driver exposes "/proc/uid_stat/*" interfaces, but it is only
used in AOSP master in what appears to be an out-of-date bandwidth test
in frameworks/base and in a somewhat recent battery utils test in the
external/chromium-trace project.
The NET_ACTIVITY_STATS driver exposes the "/proc/net/stat/activity"
interface, but I cannot track its usage anywhere in AOSP at all.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Dmitry Shmidt [Thu, 21 Apr 2016 22:47:01 +0000 (15:47 -0700)]
Revert "net: socket ioctl to reset connections matching local address"
Use SOCK_DESTROY from now on instead of SIOCKILLADDR.
This reverts commit 38f0ec724f5306c81130ca9343c856aa37a76d54.
Change-Id: I2dcd833b66c88a48de8978dce9d72ab78f9af549
Dmitry Shmidt [Thu, 21 Apr 2016 22:44:25 +0000 (15:44 -0700)]
Revert "net: fix iterating over hashtable in tcp_nuke_addr()"
This reverts commit 4747299b2c8e8778927b3df0501023d76fe4f2d5.
Dmitry Shmidt [Thu, 21 Apr 2016 22:44:11 +0000 (15:44 -0700)]
Revert "net: fix crash in tcp_nuke_addr()"
This reverts commit 08f7c4280cd5efe9e274240c42177f459431bac2.
Dmitry Shmidt [Thu, 21 Apr 2016 22:43:58 +0000 (15:43 -0700)]
Revert "Don't kill IPv4 sockets when killing IPv6 sockets was requested."
This reverts commit 8bf4413b4f54e24120b90ecbfee426beeddc3ff0.
Dmitry Shmidt [Thu, 21 Apr 2016 22:43:29 +0000 (15:43 -0700)]
Revert "tcp: Fix IPV6 module build errors"
This reverts commit 3823c8136f2170b3ac5e6a5f8b857746a786e845.
Dmitry Shmidt [Tue, 19 Apr 2016 19:44:42 +0000 (12:44 -0700)]
android: base-cfg: remove CONFIG_SWITCH
Change-Id: I3fd1aa7a54fe3a8d3ad5537cbc61386e52f41ea0
Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
Dmitry Shmidt [Tue, 19 Apr 2016 19:37:47 +0000 (12:37 -0700)]
Revert "switch: switch class and GPIO drivers."
Drivers should use extcon moving forward.
Documentation/extcon/porting-android-switch-class describes
how to port existing switch class drivers to extcon.
This reverts commit e4b8e66e0ae2e78e913d7b86f2507fdb0aa731b4.
Change-Id: I5b622c7ab4c0cb9670f8903f259a99888f503c1a
Dmitry Shmidt [Tue, 19 Apr 2016 19:37:31 +0000 (12:37 -0700)]
Revert "drivers: switch: remove S_IWUSR from dev_attr"
This reverts commit dc66dee02dcd6ea774e3ed4ae32e88b0f3b4bee7.
Amit Pundir [Mon, 11 Apr 2016 19:49:24 +0000 (01:19 +0530)]
ANDROID: base-cfg: enable CONFIG_IP_NF_NAT
IP_NF_TARGET_{MASQUERADE,NETMAP,REDIRECT} configs,
already enabled in android-base.cfg for tethering,
are of no use if CONFIG_IP_NF_NAT is not enabled.
Don't rely on platform config for that and enable
CONFIG_IP_NF_NAT in android-base.cfg as well.
Change-Id: Ic72bcebbd925b142b09539466bf963188c83108a
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Jeff Vander Stoep [Tue, 5 Apr 2016 20:06:27 +0000 (13:06 -0700)]
BACKPORT: selinux: restrict kernel module loading
Backport notes:
Backport uses kernel_module_from_file not kernel_read_file hook.
kernel_read_file replaced kernel_module_from_file in the 4.6 kernel.
There are no inode_security_() helper functions (also introduced in
4.6) so the inode lookup is done using the file_inode() helper which
is standard for kernel version < 4.6.
(Cherry picked from commit 61d612ea731e57dc510472fb746b55cdc017f371)
Utilize existing kernel_read_file hook on kernel module load.
Add module_load permission to the system class.
Enforces restrictions on kernel module origin when calling the
finit_module syscall. The hook checks that source type has
permission module_load for the target type.
Example for finit_module:
allow foo bar_file:system module_load;
Similarly, restrictions are enforced on kernel module loading when
calling the init_module syscall. The hook checks that source
type has permission module_load with itself as the target object
because the kernel module is sourced from the calling process.
Example for init_module:
allow foo foo:system module_load;
Bug: 27824855
Change-Id: I64bf3bd1ab2dc735321160642dc6bbfa996f8068
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>
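A hedged sketch of the check this hook boils down to (the helper name is
an assumption; only the system:module_load permission and the
source/target pairing come from the description above; avc.h and the
generated class/permission defines are assumed to be in scope):

/* finit_module: tsid is the module file's inode SID;
 * init_module:  tsid is the calling task's own SID.  */
static int module_load_check(u32 ssid, u32 tsid)
{
	return avc_has_perm(ssid, tsid, SECCLASS_SYSTEM,
			    SYSTEM__MODULE_LOAD, NULL);
}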
Rom Lemarchand [Thu, 7 Apr 2016 14:19:34 +0000 (07:19 -0700)]
android: base-cfg: enable CONFIG_QUOTA
Bug: 28032718
Change-Id: I7cb6b641f72085e69b90dca11d2ea68adcd02390
(cherry picked from commit e1b53a388e9cfcf870520a6899a37456cf1ae2c6)
Alex Shi [Thu, 12 May 2016 04:20:40 +0000 (12:20 +0800)]
Merge branch 'linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4-android
Alex Shi [Thu, 12 May 2016 04:20:36 +0000 (12:20 +0800)]
Merge tag 'v4.4.10' into linux-linaro-lsk-v4.4
This is the 4.4.10 stable release
Alex Shi [Thu, 12 May 2016 01:27:18 +0000 (09:27 +0800)]
Merge branch 'linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4-android
Alex Shi [Thu, 12 May 2016 01:25:41 +0000 (09:25 +0800)]
Merge branch 'v4.4/topic/mm-kaslr' into linux-linaro-lsk-v4.4
Helge Deller [Wed, 23 Mar 2016 15:00:46 +0000 (16:00 +0100)]
parisc: Use generic extable search and sort routines
Switch to the generic extable search and sort routines which were introduced
with commit a272858 from Ard Biesheuvel. This saves quite some memory in the
vmlinux binary with the 64bit kernel.
Signed-off-by: Helge Deller <deller@gmx.de>
(cherry picked from commit 0de798584bdedfdad19db21e3c7aec84f252f4f3)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Catalin Marinas [Thu, 10 Mar 2016 18:30:56 +0000 (18:30 +0000)]
arm64: kasan: Use actual memory node when populating the kernel image shadow
With the 16KB or 64KB page configurations, the generic
vmemmap_populate() implementation warns on potential offnode
page_structs via vmemmap_verify() because the arm64 kasan_init() passes
NUMA_NO_NODE instead of the actual node for the kernel image memory.
Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: James Morse <james.morse@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
(cherry picked from commit 2f76969f2eef051bdd63d38b08d78e790440b0ad)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Fri, 26 Feb 2016 16:57:14 +0000 (17:57 +0100)]
arm64: mm: treat memstart_addr as a signed quantity
Commit c031a4213c11 ("arm64: kaslr: randomize the linear region")
implements randomization of the linear region, by subtracting a random
multiple of PUD_SIZE from memstart_addr. This causes the virtual mapping
of system RAM to move upwards in the linear region, and at the same time
causes memstart_addr to assume a value which may be negative if the offset
of system RAM in the physical space is smaller than its offset relative to
PAGE_OFFSET in the virtual space.
Since memstart_addr is effectively an offset now, redefine its type as s64
so that expressions involving shifting or division preserve its sign.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 020d044f66874eba058ce8264fc550f3eca67879)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Thu, 25 Feb 2016 19:48:53 +0000 (20:48 +0100)]
arm64: lse: deal with clobbered IP registers after branch via PLT
The LSE atomics implementation uses runtime patching to patch in calls
to out of line non-LSE atomics implementations on cores that lack hardware
support for LSE. To avoid paying the overhead cost of a function call even
if no call ends up being made, the bl instruction is kept invisible to the
compiler, and the out of line implementations preserve all registers, not
just the ones that they are required to preserve as per the AAPCS64.
However, commit fd045f6cd98e ("arm64: add support for module PLTs") added
support for routing branch instructions via veneers if the branch target
offset exceeds the range of the ordinary relative branch instructions.
Since this deals with jump and call instructions that are exposed to ELF
relocations, the PLT code uses x16 to hold the address of the branch target
when it performs an indirect branch-to-register, something which is
explicitly allowed by the AAPCS64 (and ordinary compiler generated code
does not expect register x16 or x17 to retain their values across a bl
instruction).
Since the lse runtime patched bl instructions don't adhere to the AAPCS64,
they don't deal with this clobbering of registers x16 and x17. So add them
to the clobber list of the asm() statements that perform the call
instructions, and drop x16 and x17 from the list of registers that are
callee saved in the out of line non-LSE implementations.
In addition, since we have given these functions two scratch registers,
they no longer need to stack/unstack temp registers.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: factored clobber list into #define, updated Makefile comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 5be8b70af1ca78cefb8b756d157532360a5fd663)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Wed, 2 Mar 2016 08:47:13 +0000 (09:47 +0100)]
arm64: mm: check at build time that PAGE_OFFSET divides the VA space evenly
Commit 8439e62a1561 ("arm64: mm: use bit ops rather than arithmetic in
pa/va translations") changed the boundary check against PAGE_OFFSET from
an arithmetic comparison to a bit test. This means we now silently assume
that PAGE_OFFSET is a power of 2 that divides the kernel virtual address
space into two equal halves. So make that assumption explicit.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 6d2aa549de1fc998581d216de3853aa131aa4446)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Catalin Marinas [Thu, 10 Mar 2016 18:41:16 +0000 (18:41 +0000)]
arm64: kasan: Fix zero shadow mapping overriding kernel image shadow
With the 16KB and 64KB page size configurations, SWAPPER_BLOCK_SIZE is
PAGE_SIZE and ARM64_SWAPPER_USES_SECTION_MAPS is 0. Since
kimg_shadow_end is not page aligned (_end shifted by
KASAN_SHADOW_SCALE_SHIFT), the edges of previously mapped kernel image
shadow via vmemmap_populate() may be overridden by subsequent calls to
kasan_populate_zero_shadow(), leading to kernel panics like below:
------------------------------------------------------------------------------
Unable to handle kernel paging request at virtual address fffffc100135068c
pgd = fffffc8009ac0000
[fffffc100135068c] *pgd=00000009ffee0003, *pud=00000009ffee0003, *pmd=00000009ffee0003, *pte=00e0000081a00793
Internal error: Oops: 9600004f [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.5.0-rc4+ #1984
Hardware name: Juno (DT)
task: fffffe09001a0000 ti: fffffe0900200000 task.ti: fffffe0900200000
PC is at __memset+0x4c/0x200
LR is at kasan_unpoison_shadow+0x34/0x50
pc : [<fffffc800846f1cc>] lr : [<fffffc800821ff54>] pstate: 00000245
sp : fffffe0900203db0
x29: fffffe0900203db0 x28: 0000000000000000
x27: 0000000000000000 x26: 0000000000000000
x25: fffffc80099b69d0 x24: 0000000000000001
x23: 0000000000000000 x22: 0000000000002000
x21: dffffc8000000000 x20: 1fffff9001350a8c
x19: 0000000000002000 x18: 0000000000000008
x17: 0000000000000147 x16: ffffffffffffffff
x15: 79746972100e041d x14: ffffff0000000000
x13: ffff000000000000 x12: 0000000000000000
x11: 0101010101010101 x10: 1fffffc11c000000
x9 : 0000000000000000 x8 : fffffc100135068c
x7 : 0000000000000000 x6 : 000000000000003f
x5 : 0000000000000040 x4 : 0000000000000004
x3 : fffffc100134f651 x2 : 0000000000000400
x1 : 0000000000000000 x0 : fffffc100135068c
Process swapper/0 (pid: 1, stack limit = 0xfffffe0900200020)
Call trace:
[<fffffc800846f1cc>] __memset+0x4c/0x200
[<fffffc8008220044>] __asan_register_globals+0x5c/0xb0
[<fffffc8008a09d34>] _GLOBAL__sub_I_65535_1_sunrpc_cache_lookup+0x1c/0x28
[<fffffc8008f20d28>] kernel_init_freeable+0x104/0x274
[<fffffc80089e1948>] kernel_init+0x10/0xf8
[<fffffc8008093a00>] ret_from_fork+0x10/0x50
------------------------------------------------------------------------------
This patch aligns kimg_shadow_start and kimg_shadow_end to
SWAPPER_BLOCK_SIZE in all configurations.
Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
(cherry picked from commit 2776e0e8ef683a42fe3e9a5facf576b73579700e)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Mark Rutland [Tue, 22 Mar 2016 10:11:45 +0000 (10:11 +0000)]
arm64: consistently use p?d_set_huge
Commit 324420bf91f60582 ("arm64: add support for ioremap() block
mappings") added new p?d_set_huge functions which do the hard work to
generate and set a correct block entry.
These differ from open-coded huge page creation in the early page table
code by explicitly setting the P?D_TYPE_SECT bits (which are implicitly
retained by mk_sect_prot() for any valid prot), but are otherwise
identical (and cannot fail on arm64).
For simplicity and consistency, make use of these in the initial page
table creation code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit c661cb1c537e2364bfdabb298fb934fd77445e98)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Mark Rutland [Tue, 15 Mar 2016 11:22:57 +0000 (11:22 +0000)]
arm64: fix KASLR boot-time I-cache maintenance
Commit f80fb3a3d50843a4 ("arm64: add support for kernel ASLR") missed a
DSB necessary to complete I-cache maintenance in the primary boot path,
and hence stale instructions may still be present in the I-cache and may
be executed until the I-cache maintenance naturally completes.
Since commit 8ec41987436d566f ("arm64: mm: ensure patched kernel text is
fetched from PoU"), all CPUs invalidate their I-caches after their MMU
is enabled. Prior a CPU's MMU having been enabled, arbitrary lines may
have been fetched from the PoC into I-caches. We never patch text
expected to be executed with the MMU off. Thus, it is unnecessary to
perform broadcast I-cache maintenance in the primary boot path.
This patch reduces the scope of the I-cache maintenance to the local
CPU, and adds the missing DSB with similar scope, matching prior
maintenance in the primary boot path.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit b90b4a608ea2401cc491828f7a385edd2e236e37)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Wed, 9 Mar 2016 15:22:55 +0000 (15:22 +0000)]
arm64: hugetlb: partial revert of 66b3923a1a0f
Commit 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit")
introduced support for huge pages using the contiguous bit in the PTE
as opposed to block mappings, which may be slightly unwieldy (512M) in
64k page configurations.
Unfortunately, this support has resulted in some late regressions when
running the libhugetlbfs test suite with 64k pages and CONFIG_DEBUG_VM
as a result of a BUG:
| readback (2M: 64): ------------[ cut here ]------------
| kernel BUG at fs/hugetlbfs/inode.c:446!
| Internal error: Oops - BUG: 0 [#1] SMP
| Modules linked in:
| CPU: 7 PID: 1448 Comm: readback Not tainted 4.5.0-rc7 #148
| Hardware name: linux,dummy-virt (DT)
| task: fffffe0040964b00 ti: fffffe00c2668000 task.ti: fffffe00c2668000
| PC is at remove_inode_hugepages+0x44c/0x480
| LR is at remove_inode_hugepages+0x264/0x480
Rather than revert the entire patch, simply avoid advertising the
contiguous huge page sizes for now while people are actively working on
a fix. This patch can then be reverted once things have been sorted out.
Cc: David Woods <dwoods@ezchip.com>
Reported-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit ff7925848b50050732ac0401e0acf27e8b241d7b)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Yang Shi [Thu, 11 Feb 2016 21:53:10 +0000 (13:53 -0800)]
arm64: make irq_stack_ptr more robust
Switching between stacks is only valid if we are tracing ourselves while on the
irq_stack, so it is only valid when in current and non-preemptible context,
otherwise it is just zeroed off.
Fixes: 132cd887b5c5 ("arm64: Modify stack trace and dump for use with irq_stack")
Acked-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit a80a0eb70c358f8c7dda4bb62b2278dc6285217b)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 26 Jan 2016 13:48:29 +0000 (14:48 +0100)]
arm64: efi: invoke EFI_RNG_PROTOCOL to supply KASLR randomness
Since arm64 does not use a decompressor that supplies an execution
environment where it is feasible to some extent to provide a source of
randomness, the arm64 KASLR kernel depends on the bootloader to supply
some random bits in the /chosen/kaslr-seed DT property upon kernel entry.
On UEFI systems, we can use the EFI_RNG_PROTOCOL, if supplied, to obtain
some random bits. At the same time, use it to randomize the offset of the
kernel Image in physical memory.
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 2b5fe07a78a09a32002642b8a823428ade611f16)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Mon, 11 Jan 2016 10:47:49 +0000 (11:47 +0100)]
efi: stub: use high allocation for converted command line
Before we can move the command line processing before the allocation
of the kernel, which is required for detecting the 'nokaslr' option
which controls that allocation, move the converted command line higher
up in memory, to prevent it from interfering with the kernel itself.
Since x86 needs the address to fit in 32 bits, use UINT_MAX as the upper
bound there. Otherwise, use ULONG_MAX (i.e., no limit).
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 48fcb2d0216103d15306caa4814e2381104df6d8)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Mon, 11 Jan 2016 09:43:16 +0000 (10:43 +0100)]
efi: stub: add implementation of efi_random_alloc()
This implements efi_random_alloc(), which allocates a chunk of memory of
a certain size at a certain alignment, and uses the random_seed argument
it receives to randomize the address of the allocation.
This is implemented by iterating over the UEFI memory map, counting the
number of suitable slots (aligned offsets) within each region, and picking
a random number between 0 and 'number of slots - 1' to select the slot.
This should guarantee that each possible offset is chosen with equal
likelihood.
Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 2ddbfc81eac84a299cb4747a8764bc43f23e9008)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
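A toy model of that slot-picking logic in plain C (not the EFI stub code;
the region handling and the modulo seed reduction are simplified for
illustration): count the aligned slots the allocation fits into across all
regions, reduce the seed to one slot index, then map the index back to an
address.

#include <stdint.h>
#include <stdio.h>

struct region { uint64_t start, size; };

static uint64_t slots_in(const struct region *r, uint64_t size, uint64_t align)
{
	uint64_t first = (r->start + align - 1) & ~(align - 1);

	if (r->size < size || first + size > r->start + r->size)
		return 0;
	return (r->start + r->size - size - first) / align + 1;
}

static uint64_t random_alloc(const struct region *regs, int n,
			     uint64_t size, uint64_t align, uint64_t seed)
{
	uint64_t total = 0, target;
	int i;

	for (i = 0; i < n; i++)
		total += slots_in(&regs[i], size, align);
	if (!total)
		return 0;

	target = seed % total;			/* pick one slot uniformly */
	for (i = 0; i < n; i++) {
		uint64_t s = slots_in(&regs[i], size, align);

		if (target < s) {
			uint64_t first = (regs[i].start + align - 1) & ~(align - 1);

			return first + target * align;
		}
		target -= s;
	}
	return 0;
}

int main(void)
{
	struct region map[] = { { 0x40000000, 0x10000000 },
				{ 0x80000000, 0x20000000 } };

	printf("0x%llx\n", (unsigned long long)
	       random_alloc(map, 2, 0x200000, 0x200000, 12345));
	return 0;
}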
Ard Biesheuvel [Sun, 10 Jan 2016 10:29:07 +0000 (11:29 +0100)]
efi: stub: implement efi_get_random_bytes() based on EFI_RNG_PROTOCOL
This exposes the firmware's implementation of EFI_RNG_PROTOCOL via a new
function efi_get_random_bytes().
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit e4fbf4767440472f9d23b0f25a2b905e1c63b6a8)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Fri, 29 Jan 2016 10:59:03 +0000 (11:59 +0100)]
arm64: kaslr: randomize the linear region
When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been
provided by the bootloader, randomize the placement of RAM inside the
linear region if sufficient space is available. For instance, on a 4KB
granule/3 levels kernel, the linear region is 256 GB in size, and we can
choose any 1 GB aligned offset that is far enough from the top of the
address space to fit the distance between the start of the lowest memblock
and the top of the highest memblock.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit c031a4213c11a5db475f528c182f7b3858df11db)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 26 Jan 2016 13:12:01 +0000 (14:12 +0100)]
arm64: add support for kernel ASLR
This adds support for KASLR, implemented based on entropy provided by the
bootloader in the /chosen/kaslr-seed DT property upon kernel entry.
of the address space (VA_BITS) and the page size, the entropy in the
virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
4 levels), with the sidenote that displacements that result in the kernel
image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
granule kernels, respectively) are not allowed, and will be rounded up to
an acceptable value.
If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
randomized independently from the core kernel. This makes it less likely
that the location of core kernel data structures can be determined by an
adversary, but causes all function calls from modules into the core kernel
to be resolved via entries in the module PLTs.
If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
randomized by choosing a page aligned 128 MB region inside the interval
[_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
entropy (depending on page size), independently of the kernel randomization,
but still guarantees that modules are within the range of relative branch
and jump instructions (with the caveat that, since the module region is
shared with other uses of the vmalloc area, modules may need to be loaded
further away if the module region is exhausted)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit f80fb3a3d50843a401dac4b566b3b131da8077a2)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 26 Jan 2016 08:13:44 +0000 (09:13 +0100)]
arm64: add support for building vmlinux as a relocatable PIE binary
This implements CONFIG_RELOCATABLE, which links the final vmlinux
image with a dynamic relocation section, allowing the early boot code
to perform a relocation to a different virtual address at runtime.
This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 1e48ef7fcc374051730381a2a05da77eb4eafdb0)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Fri, 1 Jan 2016 14:02:12 +0000 (15:02 +0100)]
arm64: switch to relative exception tables
Instead of using absolute addresses for both the exception location
and the fixup, use offsets relative to the exception table entry values.
Not only does this cut the size of the exception table in half, it is
also a prerequisite for KASLR, since absolute exception table entries
are subject to dynamic relocation, which is incompatible with the sorting
of the exception table that occurs at build time.
This patch also introduces the _ASM_EXTABLE preprocessor macro (which
exists on x86 as well) and its _asm_extable assembly counterpart, as
shorthands to emit exception table entries.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 6c94f27ac847ff8ef15b3da5b200574923bd6287)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
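For illustration, a minimal sketch of what a relative exception table
entry looks like and how the absolute addresses are recovered (the struct
layout mirrors the scheme described above; the accessor names are
assumptions):

struct exception_table_entry {
	int insn;	/* offset from &entry->insn to the faulting insn  */
	int fixup;	/* offset from &entry->fixup to the fixup handler */
};

static inline unsigned long ex_to_insn(const struct exception_table_entry *x)
{
	return (unsigned long)&x->insn + x->insn;
}

static inline unsigned long ex_to_fixup(const struct exception_table_entry *x)
{
	return (unsigned long)&x->fixup + x->fixup;
}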
Ard Biesheuvel [Fri, 1 Jan 2016 11:39:09 +0000 (12:39 +0100)]
extable: add support for relative extables to search and sort routines
This adds support to the generic search_extable() and sort_extable()
implementations for dealing with exception table entries whose fields
contain relative offsets rather than absolute addresses.
Acked-by: Helge Deller <deller@gmx.de>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit a272858a3c1ecd4a935ba23c66668f81214bd110)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Sun, 10 Jan 2016 10:42:28 +0000 (11:42 +0100)]
scripts/sortextable: add support for ET_DYN binaries
Add support to scripts/sortextable for handling relocatable (PIE)
executables, whose ELF type is ET_DYN, not ET_EXEC. Other than adding
support for the new type, no changes are needed.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 7b957b6e603623ef8b2e8222fa94b976df613fa2)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
James Morse [Tue, 2 Feb 2016 15:53:59 +0000 (15:53 +0000)]
arm64: futex.h: Add missing PAN toggling
futex.h's futex_atomic_cmpxchg_inatomic() does not use the
__futex_atomic_op() macro and needs its own PAN toggling. This was missed
when the feature was implemented.
Fixes: 338d4f49d6f ("arm64: kernel: Add support for Privileged Access Never")
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 811d61e384e24759372bb3f01772f3744b0a8327)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Mon, 11 Jan 2016 16:08:26 +0000 (17:08 +0100)]
arm64: make asm/elf.h available to asm files
This reshuffles some code in asm/elf.h and puts a #ifndef __ASSEMBLY__
around its C definitions so that the CPP defines can be used in asm
source files as well.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 4a2e034e5cdadde4c712f79bdd57d1455c76a3db)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Sat, 26 Dec 2015 11:46:40 +0000 (12:46 +0100)]
arm64: avoid dynamic relocations in early boot code
Before implementing KASLR for arm64 by building a self-relocating PIE
executable, we have to ensure that values we use before the relocation
routine is executed are not subject to dynamic relocation themselves.
This applies not only to virtual addresses, but also to values that are
supplied by the linker at build time and relocated using R_AARCH64_ABS64
relocations.
So instead, use assemble time constants, or force the use of static
relocations by folding the constants into the instructions.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 2bf31a4a05f5b00f37d65ba029d36a0230286cb7)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Sat, 26 Dec 2015 12:48:02 +0000 (13:48 +0100)]
arm64: avoid R_AARCH64_ABS64 relocations for Image header fields
Unfortunately, the current way of using the linker to emit build time
constants into the Image header will no longer work once we switch to
the use of PIE executables. The reason is that such constants are emitted
into the binary using R_AARCH64_ABS64 relocations, which are resolved at
runtime, not at build time, and the places targeted by those relocations
will contain zeroes before that.
So refactor the endian swapping linker script constant generation code so
that it emits the upper and lower 32-bit words separately.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 6ad1fe5d9077a1ab40bf74b61994d2e770b00b14)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 24 Nov 2015 11:37:35 +0000 (12:37 +0100)]
arm64: add support for module PLTs
This adds support for emitting PLTs at module load time for relative
branches that are out of range. This is a prerequisite for KASLR, which
may place the kernel and the modules anywhere in the vmalloc area,
making it more likely that branch target offsets exceed the maximum
range of +/- 128 MB.
In this version, I removed the distinction between relocations against
.init executable sections and ordinary executable sections. The reason
is that it is hardly worth the trouble, given that .init.text usually
does not contain that many far branches, and this version now only
reserves PLT entry space for jump and call relocations against undefined
symbols (since symbols defined in the same module can be assumed to be
within +/- 128 MB)
For example, the mac80211.ko module (which is fairly sizable at ~400 KB)
built with -mcmodel=large gives the following relocation counts:
relocs branches unique !local
.text 3925 3347 518 219
.init.text 11 8 7 1
.exit.text 4 4 4 1
.text.unlikely 81 67 36 17
('unique' means branches to unique type/symbol/addend combos, of which
!local is the subset referring to undefined symbols)
IOW, we are only emitting a single PLT entry for the .init sections, and
we are better off just adding it to the core PLT section instead.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit fd045f6cd98ec4953147b318418bd45e441e52a3)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 23 Feb 2016 07:56:45 +0000 (08:56 +0100)]
arm64: move brk immediate argument definitions to separate header
Instead of reversing the header dependency between asm/bug.h and
asm/debug-monitors.h, split off the brk instruction immediate value
defines into a new header asm/brk-imm.h, and include it from both.
This solves the circular dependency issue that prevents BUG() from
being used in some header files, and keeps the definitions together.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit f98deee9a9f8c47d05a0f64d86440882dca772ff)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Mon, 22 Feb 2016 17:46:04 +0000 (18:46 +0100)]
arm64: mm: use bit ops rather than arithmetic in pa/va translations
Since PAGE_OFFSET is chosen such that it cuts the kernel VA space right
in half, and since the size of the kernel VA space itself is always a
power of 2, we can treat PAGE_OFFSET as a bitmask and replace the
additions/subtractions with 'or' and 'and-not' operations.
For the comparison against PAGE_OFFSET, a mov/cmp/branch sequence ends
up getting replaced with a single tbz instruction. For the additions and
subtractions, we save a mov instruction since the mask is folded into the
instruction's immediate field.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 8439e62a15614e8fcd43835d57b7245cd9870dc5)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
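A userspace illustration of that transformation (the constants are made
up; the real arm64 macros live in asm/memory.h): because every linear-map
VA has all the bits of PAGE_OFFSET set, the add/subtract in the va<->pa
conversions can be replaced by OR / AND-NOT.

#include <assert.h>
#include <stdint.h>

#define PAGE_OFFSET	0xffffffc000000000ULL	/* example: top half of VA space */
#define PHYS_OFFSET	0x0000000080000000ULL	/* example RAM base */

static uint64_t virt_to_phys(uint64_t va)
{
	return (va & ~PAGE_OFFSET) + PHYS_OFFSET;	/* AND-NOT instead of subtract */
}

static uint64_t phys_to_virt(uint64_t pa)
{
	return (pa - PHYS_OFFSET) | PAGE_OFFSET;	/* OR instead of add */
}

int main(void)
{
	uint64_t va = PAGE_OFFSET + 0x123456;

	/* the bit-op form equals the arithmetic form for any linear-map VA */
	assert(virt_to_phys(va) == va - PAGE_OFFSET + PHYS_OFFSET);
	assert(phys_to_virt(virt_to_phys(va)) == va);
	return 0;
}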
Ard Biesheuvel [Mon, 22 Feb 2016 17:46:03 +0000 (18:46 +0100)]
arm64: mm: only perform memstart_addr sanity check if DEBUG_VM
Checking whether memstart_addr has been assigned every time it is
referenced adds a branch instruction that may hurt performance if
the reference in question occurs on a hot path. So only perform the
check if CONFIG_DEBUG_VM=y.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[catalin.marinas@arm.com: replaced #ifdef with VM_BUG_ON]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit a92405f082d43267575444a6927085e4c8a69e4e)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Catalin Marinas [Fri, 19 Feb 2016 14:28:58 +0000 (14:28 +0000)]
arm64: User die() instead of panic() in do_page_fault()
The former gives better error reporting on unhandled permission faults
(introduced by the UAO patches).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 70c8abc28762d04e36c92e07eee2ce6ab41049cb)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:42 +0000 (13:52 +0100)]
arm64: allow kernel Image to be loaded anywhere in physical memory
This relaxes the kernel Image placement requirements, so that it
may be placed at any 2 MB aligned offset in physical memory.
This is accomplished by ignoring PHYS_OFFSET when installing
memblocks, and accounting for the apparent virtual offset of
the kernel Image. As a result, virtual address references
below PAGE_OFFSET are correctly mapped onto physical references
into the kernel Image regardless of where it sits in memory.
Special care needs to be taken for dealing with memory limits passed
via mem=, since the generic implementation clips memory top down, which
may clip the kernel image itself if it is loaded high up in memory. To
deal with this case, we simply add back the memory covering the kernel
image, which may result in more memory to be retained than was passed
as a mem= parameter.
Since mem= should not be considered a production feature, a panic notifier
handler is installed that dumps the memory limit at panic time if one was
set.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit a7f8de168ace487fa7b88cb154e413cf40e87fc6)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:41 +0000 (13:52 +0100)]
arm64: defer __va translation of initrd_start and initrd_end
Before deferring the assignment of memstart_addr in a subsequent patch, to
the moment where all memory has been discovered and possibly clipped based
on the size of the linear region and the presence of a mem= command line
parameter, we need to ensure that memstart_addr is not used to perform __va
translations before it is assigned.
One such use is in the generic early DT discovery of the initrd location,
which is recorded as a virtual address in the globals initrd_start and
initrd_end. So wire up the generic support to declare the initrd addresses,
and implement it without __va() translations, and perform the translation
after memstart_addr has been assigned.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit a89dea585371a9d5d85499db47c93f129be8e0c4)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:40 +0000 (13:52 +0100)]
arm64: move kernel image to base of vmalloc area
This moves the module area to right before the vmalloc area, and moves
the kernel image to the base of the vmalloc area. This is an intermediate
step towards implementing KASLR, which allows the kernel image to be
located anywhere in the vmalloc area.
Since other subsystems such as hibernate may still need to refer to the
kernel text or data segments via their linear addresses, both are mapped
in the linear region as well. The linear alias of the text region is
mapped read-only/non-executable to prevent inadvertent modification or
execution.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit f9040773b7bbbd9e98eb6184a263512a7cfc133f)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:39 +0000 (13:52 +0100)]
arm64: kvm: deal with kernel symbols outside of linear mapping
KVM on arm64 uses a fixed offset between the linear mapping at EL1 and
the HYP mapping at EL2. Before we can move the kernel virtual mapping
out of the linear mapping, we have to make sure that references to kernel
symbols that are accessed via the HYP mapping are translated to their
linear equivalent.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit a0bf9776cd0be4490d4675d4108e13379849fc7f)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
skip new funcs create_hyp_mappings(__start_rodata,
in arch/arm/kvm/arm.c and keep funcs in arch/arm64/kvm/hyp.S
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:38 +0000 (13:52 +0100)]
arm64: decouple early fixmap init from linear mapping
Since the early fixmap page tables are populated using pages that are
part of the static footprint of the kernel, they are covered by the
initial kernel mapping, and we can refer to them without using __va/__pa
translations, which are tied to the linear mapping.
Since the fixmap page tables are disjoint from the kernel mapping up
to the top level pgd entry, we can refer to bm_pte[] directly, and there
is no need to walk the page tables and perform __pa()/__va() translations
at each step.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 157962f5a8f236cab898b68bdaa69ce68922f0bf)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:37 +0000 (13:52 +0100)]
arm64: pgtable: implement static [pte|pmd|pud]_offset variants
The page table accessors pte_offset(), pud_offset() and pmd_offset()
rely on __va translations, so they can only be used after the linear
mapping has been installed. For the early fixmap and kasan init routines,
whose page tables are allocated statically in the kernel image, these
functions will return bogus values. So implement pte_offset_kimg(),
pmd_offset_kimg() and pud_offset_kimg(), which can be used instead
before any page tables have been allocated dynamically.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 6533945a32c762c5db70d7a3ec251a040b2d9661)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:36 +0000 (13:52 +0100)]
arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region
This introduces the preprocessor symbol KIMAGE_VADDR which will serve as
the symbolic virtual base of the kernel region, i.e., the kernel's virtual
offset will be KIMAGE_VADDR + TEXT_OFFSET. For now, we define it as being
equal to PAGE_OFFSET, but in the future, it will be moved below it once
we move the kernel virtual mapping out of the linear mapping.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit ab893fb9f1b17f02139bce547bb4b69e96b9ae16)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:35 +0000 (13:52 +0100)]
arm64: add support for ioremap() block mappings
This wires up the existing generic huge-vmap feature, which allows
ioremap() to use PMD or PUD sized block mappings. It also adds support
to the unmap path for dealing with block mappings, which will allow us
to unmap the __init region using unmap_kernel_range() in a subsequent
patch.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 324420bf91f60582bb481133db9547111768ef17)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:34 +0000 (13:52 +0100)]
arm64: prevent potential circular header dependencies in asm/bug.h
Currently, using BUG_ON() in header files is cumbersome, due to the fact
that asm/bug.h transitively includes a lot of other header files, resulting
in the actual BUG_ON() invocation appearing before its definition in the
preprocessor input. So let's reverse the #include dependency between
asm/bug.h and asm/debug-monitors.h, by moving the definition of BUG_BRK_IMM
from the latter to the former. Also fix up one user of asm/debug-monitors.h
which relied on a transitive include.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 03336b1df9929e5d9c28fd9768948b6151cb046c)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
skip arch/arm64/kvm/hyp/debug-sr.c
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:33 +0000 (13:52 +0100)]
of/fdt: factor out assignment of initrd_start/initrd_end
Since architectures may not yet have their linear mapping up and running
when the initrd address is discovered from the DT, factor out the
assignment of initrd_start and initrd_end, so that an architecture can
override it and use the translation it needs.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
369bc9abf22bf026e8645a4dd746b90649a2f6ee)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
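A sketch of the factored-out helper, which an architecture can then override when __va() is not yet usable; the helper name and body below follow the description but should be treated as assumptions:
/* drivers/of/fdt.c (sketch) */
static void __init __early_init_dt_declare_initrd(unsigned long start,
						   unsigned long end)
{
	/* default translation: the linear mapping is assumed to be up */
	initrd_start = (unsigned long)__va(start);
	initrd_end = (unsigned long)__va(end);
	initrd_below_start_ok = 1;
}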
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:32 +0000 (13:52 +0100)]
of/fdt: make memblock minimum physical address arch configurable
By default, early_init_dt_add_memory_arch() ignores memory below
the base of the kernel image since it won't be addressable via the
linear mapping. However, this is not appropriate anymore once we
decouple the kernel text mapping from the linear mapping, so archs
may want to drop the low limit entirely. So allow the minimum to be
overridden by setting MIN_MEMBLOCK_ADDR.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
270522a04f7a9911983878fa37da467f9ff1c938)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
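A sketch of the override point in drivers/of/fdt.c as described: the default keeps the existing behaviour, and an architecture can define MIN_MEMBLOCK_ADDR (to 0, say) to accept memory below the kernel image:
/* drivers/of/fdt.c (sketch) */
#ifndef MIN_MEMBLOCK_ADDR
#define MIN_MEMBLOCK_ADDR	__pa(PAGE_OFFSET)
#endif

void __init early_init_dt_add_memory_arch(u64 base, u64 size)
{
	const u64 phys_offset = MIN_MEMBLOCK_ADDR;
	/* ... clamp base/size against phys_offset, then memblock_add() ... */
}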
Catalin Marinas [Thu, 18 Feb 2016 15:50:04 +0000 (15:50 +0000)]
arm64: Remove the get_thread_info() function
This function was introduced by previous commits implementing UAO.
However, it can be replaced with task_thread_info() in
uao_thread_switch() or get_fs() in do_page_fault() (the latter being
called only on the current context, so no need for using the saved
pt_regs).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
e950631e84e7e38892ffbeee5e1816b270026b0e)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
James Morse [Fri, 5 Feb 2016 14:58:50 +0000 (14:58 +0000)]
arm64: kernel: Don't toggle PAN on systems with UAO
If a CPU supports both Privileged Access Never (PAN) and User Access
Override (UAO), we don't need to disable/re-enable PAN around all
copy_to_user()-like calls.
UAO alternatives cause these calls to use the 'unprivileged' load/store
instructions, which are overridden to be the privileged kind when
fs==KERNEL_DS.
This patch changes the copy_to_user() calls to have their PAN toggling
depend on a new composite 'feature' ARM64_ALT_PAN_NOT_UAO.
If both features are detected, PAN will be enabled, but the copy_to_user()
alternatives will not be applied. This means PAN will be enabled all the
time for these functions. If only PAN is detected, the toggling will be
enabled as normal.
This will save the time taken to disable/re-enable PAN, and allow us to
catch copy_to_user() accesses that occur with fs==KERNEL_DS.
Futex and swp-emulation code continue to hang their PAN toggling code on
ARM64_HAS_PAN.
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
705441960033e66b63524521f153fbb28c99ddbd)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
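A sketch of what hanging the uaccess PAN toggle on the composite capability looks like; the helper name below is hypothetical (in the kernel the asm sits directly in the uaccess routines), and the macro spellings follow the description:
/* arch/arm64/include/asm/uaccess.h (sketch) */
static inline void uaccess_disable_pan(void)
{
	/* only toggle PAN on CPUs that have PAN but not UAO */
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_ALT_PAN_NOT_UAO,
			CONFIG_ARM64_PAN));
}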
James Morse [Fri, 5 Feb 2016 14:58:49 +0000 (14:58 +0000)]
arm64: cpufeature: Test 'matches' pointer to find the end of the list
CPU feature code uses the desc field as a test to find the end of the list,
which means every entry must have a description. This generates noise for
entries in the list that aren't really features, but combinations of them.
e.g.
> CPU features: detected feature: Privileged Access Never
> CPU features: detected feature: PAN and not UAO
These combination features are needed for corner cases with alternatives,
where cpu features interact.
Change all walkers of the arm64_features[] and arm64_hwcaps[] lists to test
'matches' not 'desc', and only print 'desc' if it is non-NULL.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
644c2ae198412c956700e55a2acf80b2541f6aa5)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
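The end-of-list test change boils down to the following pattern; this is a sketch of the walker with field names as described in the message, and the surrounding details are assumptions:
/* arch/arm64/kernel/cpufeature.c (sketch) */
void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
			     const char *info)
{
	int i;

	for (i = 0; caps[i].matches; i++) {	/* was: for (; caps[i].desc; i++) */
		if (!caps[i].matches(&caps[i]))
			continue;
		if (!cpus_have_cap(caps[i].capability) && caps[i].desc)
			pr_info("%s %s\n", info, caps[i].desc);
		cpus_set_cap(caps[i].capability);
	}
}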
James Morse [Fri, 5 Feb 2016 14:58:48 +0000 (14:58 +0000)]
arm64: kernel: Add support for User Access Override
'User Access Override' is a new ARMv8.2 feature which allows the
unprivileged load and store instructions to be overridden to behave in
the normal way.
This patch converts {get,put}_user() and friends to use ldtr*/sttr*
instructions - so that they can only access EL0 memory, then enables
UAO when fs==KERNEL_DS so that these functions can access kernel memory.
This allows user space's read/write permissions to be checked against the
page tables, instead of testing addr<USER_DS, then using the kernel's
read/write permissions.
Signed-off-by: James Morse <james.morse@arm.com>
[catalin.marinas@arm.com: move uao_thread_switch() above dsb()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
57f4959bad0a154aeca125b7d38d1d9471a12422)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
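A sketch of the thread-switch hook implied by "enables UAO when fs==KERNEL_DS"; the function name follows the note above, but the body should be treated as illustrative:
/* arch/arm64/kernel/process.c (sketch) */
static void uao_thread_switch(struct task_struct *next)
{
	if (IS_ENABLED(CONFIG_ARM64_UAO)) {
		/* UAO on: unprivileged loads/stores behave as privileged ones */
		if (task_thread_info(next)->addr_limit == KERNEL_DS)
			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO));
		else
			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
	}
}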
James Morse [Fri, 5 Feb 2016 14:58:47 +0000 (14:58 +0000)]
arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
ARMv8.2 adds a new feature register id_aa64mmfr2. This patch adds the
cpu feature boiler plate used by the actual features in later patches.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
406e308770a92bd33995b2e5b681e86358328bb0)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
James Morse [Fri, 5 Feb 2016 14:58:46 +0000 (14:58 +0000)]
arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro
Older assemblers may not have support for newer feature registers. To get
round this, sysreg.h provides a 'mrs_s' macro that takes a register
encoding and generates the raw instruction.
Change read_cpuid() to use mrs_s in all cases so that new registers
don't have to be a special case. Including sysreg.h means we need to move
the include and definition of read_cpuid() after the #ifndef __ASSEMBLY__
to avoid syntax errors in vmlinux.lds.
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
0f54b14e76f5302afe164dc911b049b5df836ff5)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Ard Biesheuvel [Mon, 15 Feb 2016 08:51:49 +0000 (09:51 +0100)]
arm64: use local label prefixes for __reg_num symbols
The __reg_num_xNN symbols that are used to implement the msr_s and
mrs_s macros are recorded in the ELF metadata of each object file.
This does not affect the size of the final binary, but it does clutter
the output of tools like readelf, i.e.,
$ readelf -a vmlinux |grep -c __reg_num_x
50976
So let's use symbols with the .L prefix; these are strictly local,
and don't end up in the object files.
$ readelf -a vmlinux |grep -c __reg_num_x
0
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
7abc7d833c9eb16efc8a59239d3771a6e30be367)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
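The mrs_s/msr_s helpers build the instruction encoding from per-register symbols, and the change simply renames those symbols with the assembler-local .L prefix. A sketch of the file-scope asm, with the encoding details treated as illustrative:
/* arch/arm64/include/asm/sysreg.h (sketch) */
asm(
"	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n"
"	.equ	.L__reg_num_x\\num, \\num\n"	/* was __reg_num_x\num */
"	.endr\n"
"	.equ	.L__reg_num_xzr, 31\n"
"\n"
"	.macro	mrs_s, rt, sreg\n"
"	.inst	0xd5200000|(\\sreg)|(.L__reg_num_\\rt)\n"
"	.endm\n"
);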
David Brown [Wed, 10 Feb 2016 21:52:22 +0000 (13:52 -0800)]
arm64: vdso: Mark vDSO code as read-only
Although the arm64 vDSO is cleanly separated by code/data with the
code being read-only in userspace mappings, the code page is still
writable from the kernel. There have been exploits (such as
http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
from a bad kernel write to full root.
Prevent this specific exploit on arm64 by putting the vDSO code page
in read-only memory as well.
Before the change:
[ 3.138366] vdso: 2 pages (1 code @ ffffffc000a71000, 1 data @ ffffffc000a70000)
---[ Kernel Mapping ]---
0xffffffc000000000-0xffffffc000082000 520K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000082000-0xffffffc000200000 1528K ro x SHD AF UXN MEM/NORMAL
0xffffffc000200000-0xffffffc000800000 6M ro x SHD AF BLK UXN MEM/NORMAL
0xffffffc000800000-0xffffffc0009b6000 1752K ro x SHD AF UXN MEM/NORMAL
0xffffffc0009b6000-0xffffffc000c00000 2344K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000c00000-0xffffffc008000000 116M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc00c000000-0xffffffc07f000000 1840M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc800000000-0xffffffc840000000 1G RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc840000000-0xffffffc87ae00000 942M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87ae00000-0xffffffc87ae70000 448K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af80000-0xffffffc87af8a000 40K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af8b000-0xffffffc87b000000 468K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87b000000-0xffffffc87fe00000 78M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87fe00000-0xffffffc87ff50000 1344K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87ff90000-0xffffffc87ffa0000 64K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87fff0000-0xffffffc880000000 64K RW NX SHD AF UXN MEM/NORMAL
After:
[ 3.138368] vdso: 2 pages (1 code @ ffffffc0006de000, 1 data @ ffffffc000a74000)
---[ Kernel Mapping ]---
0xffffffc000000000-0xffffffc000082000 520K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000082000-0xffffffc000200000 1528K ro x SHD AF UXN MEM/NORMAL
0xffffffc000200000-0xffffffc000800000 6M ro x SHD AF BLK UXN MEM/NORMAL
0xffffffc000800000-0xffffffc0009b8000 1760K ro x SHD AF UXN MEM/NORMAL
0xffffffc0009b8000-0xffffffc000c00000 2336K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000c00000-0xffffffc008000000 116M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc00c000000-0xffffffc07f000000 1840M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc800000000-0xffffffc840000000 1G RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc840000000-0xffffffc87ae00000 942M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87ae00000-0xffffffc87ae70000 448K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af80000-0xffffffc87af8a000 40K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af8b000-0xffffffc87b000000 468K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87b000000-0xffffffc87fe00000 78M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87fe00000-0xffffffc87ff50000 1344K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87ff90000-0xffffffc87ffa0000 64K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87fff0000-0xffffffc880000000 64K RW NX SHD AF UXN MEM/NORMAL
Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the
PaX Team, Brad Spengler, and Kees Cook.
Signed-off-by: David Brown <david.brown@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[catalin.marinas@arm.com: removed superfluous __PAGE_ALIGNED_DATA]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
88d8a7994e564d209d4b2583496631c2357d386b)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Yang Shi [Fri, 5 Feb 2016 23:50:18 +0000 (15:50 -0800)]
arm64: ubsan: select ARCH_HAS_UBSAN_SANITIZE_ALL
To enable UBSAN on arm64, ARCH_HAS_UBSAN_SANITIZE_ALL needs to be selected.
A basic kernel boot test passes on arm64 with CONFIG_UBSAN_SANITIZE_ALL
enabled.
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
f0b7f8a4b44657386273a67179dd901c81cd11a6)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Laura Abbott [Sat, 6 Feb 2016 00:24:48 +0000 (16:24 -0800)]
arm64: ptdump: Indicate whether memory should be faulting
With CONFIG_DEBUG_PAGEALLOC, pages do not have the valid bit
set when free in the buddy allocator. Add an indication to
the page table dumping code that the valid bit is not set,
'F' for fault, to make this easier to understand.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
d7e9d59494a9a5d83274f5af2148b82ca22dff3f)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Laura Abbott [Sat, 6 Feb 2016 00:24:47 +0000 (16:24 -0800)]
arm64: Add support for ARCH_SUPPORTS_DEBUG_PAGEALLOC
ARCH_SUPPORTS_DEBUG_PAGEALLOC provides a hook to map and unmap
pages for debugging purposes. This requires memory be mapped
with PAGE_SIZE mappings since breaking down larger mappings
at runtime will lead to TLB conflicts. Check if debug_pagealloc
is enabled at runtime and, if so, map everything with PAGE_SIZE
pages. Implement the functions to actually map/unmap the
pages at runtime.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
[catalin.marinas@arm.com: static annotation block_mappings_allowed() and #ifdef]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
83863f25e4b8214e994ef8b5647aad614d74b45d)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
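Two pieces are involved as described above: refusing block mappings when debug_pagealloc is active, and the map/unmap hook itself. A sketch, with block_mappings_allowed() named per the note above and the remaining helper names assumed:
/* arch/arm64/mm/mmu.c (sketch) */
static bool block_mappings_allowed(phys_addr_t (*pgtable_alloc)(void))
{
	/*
	 * Mappings built without an allocator (early fixmap, FDT) may still
	 * use blocks; everything else falls back to pages whenever
	 * debug_pagealloc is active.
	 */
	return !pgtable_alloc || !debug_pagealloc_enabled();
}

/* arch/arm64/mm/pageattr.c (sketch) */
void __kernel_map_pages(struct page *page, int numpages, int enable)
{
	unsigned long addr = (unsigned long)page_address(page);

	if (enable)
		__change_memory_common(addr, PAGE_SIZE * numpages,
				       __pgprot(PTE_VALID), __pgprot(0));
	else
		__change_memory_common(addr, PAGE_SIZE * numpages,
				       __pgprot(0), __pgprot(PTE_VALID));
}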
Laura Abbott [Sat, 6 Feb 2016 00:24:46 +0000 (16:24 -0800)]
arm64: Drop alloc function from create_mapping
create_mapping is only used in fixmap_remap_fdt. All the create_mapping
calls need to happen on existing translation table pages without
additional allocations. Rather than have an alloc function be called
and fail, just set it to NULL and catch its use. Also change
the name to create_mapping_noalloc to better capture what exactly is
going on.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
132233a759580f5ce9b1bfaac9073e47d03c460d)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
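A sketch of the renamed wrapper, with the allocator deliberately passed as NULL so any attempt to allocate a table is caught; the surrounding helper names are assumptions:
/* arch/arm64/mm/mmu.c (sketch) */
static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
					  phys_addr_t size, pgprot_t prot)
{
	if (virt < VMALLOC_START) {
		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
			&phys, virt);
		return;
	}
	/* NULL allocator: every table this walk touches must already exist */
	__create_mapping(&init_mm, pgd_offset_k(virt), phys, virt, size, prot, NULL);
}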
Will Deacon [Wed, 10 Feb 2016 10:07:30 +0000 (10:07 +0000)]
arm64: prefetch: add missing #include for spin_lock_prefetch
As of
52e662326e1e ("arm64: prefetch: don't provide spin_lock_prefetch
with LSE"), spin_lock_prefetch is patched at runtime when the LSE atomics
are in use. This relies on the ARM64_LSE_ATOMIC_INSN macro to drive
the alternatives framework, but that macro is only available via
asm/lse.h, which isn't explicitly included in processor.h. Consequently,
drivers can run into build failures such as:
In file included from include/linux/prefetch.h:14:0,
from drivers/net/ethernet/intel/i40e/i40e_txrx.c:27:
arch/arm64/include/asm/processor.h: In function 'spin_lock_prefetch':
arch/arm64/include/asm/processor.h:183:15: error: expected string literal before 'ARM64_LSE_ATOMIC_INSN'
asm volatile(ARM64_LSE_ATOMIC_INSN(
This patch adds the missing include and gets things building again.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
afb83cc3f0e4f86ea0e1cc3db7a90f58f1abd4d5)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
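The fix itself is a one-line include in asm/processor.h so the alternative macro is visible where spin_lock_prefetch() uses it; a sketch of the affected definition, with the exact asm treated as illustrative:
/* arch/arm64/include/asm/processor.h (sketch) */
#include <asm/lse.h>	/* provides ARM64_LSE_ATOMIC_INSN */

static inline void spin_lock_prefetch(const void *ptr)
{
	asm volatile(ARM64_LSE_ATOMIC_INSN(
		     "prfm pstl1strm, %a0",	/* LL/SC case: prefetch for store */
		     "nop") : : "p" (ptr));	/* LSE case: no prefetch needed */
}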
Andrew Pinski [Tue, 2 Feb 2016 12:46:26 +0000 (12:46 +0000)]
arm64: lib: patch in prfm for copy_page if requested
On ThunderX T88 pass 1 and pass 2, there is no hardware prefetching, so
we need to patch in explicit software prefetching instructions.
Prefetching improves this code by 60% over the original code, and by 2x
over the code without prefetching, on the affected hardware, as measured with
the benchmark code at https://github.com/apinski-cavium/copy_page_benchmark
Signed-off-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
60e0a09db24adc8809696307e5d97cc4ba7cb3e0)
Signed-off-by: Alex Shi <alex.shi@linaro.org>