firefly-linux-kernel-4.4.55.git
7 years ago  ANDROID: sdcardfs: Fix gid issue
Daniel Rosenberg [Mon, 13 Mar 2017 22:34:03 +0000 (15:34 -0700)]
ANDROID: sdcardfs: Fix gid issue

We were already calculating most of these values,
and erroring out because the check was confused by this.
Instead of recalculating, adjust it as needed.

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 36160015
Change-Id: I9caf3e2fd32ca2e37ff8ed71b1d392f1761bc9a9

7 years ago  ANDROID: sdcardfs: Use tabs instead of spaces in multiuser.h
Daniel Rosenberg [Mon, 13 Mar 2017 20:53:54 +0000 (13:53 -0700)]
ANDROID: sdcardfs: Use tabs instead of spaces in multiuser.h

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 35331000
Change-Id: Ic7801914a7dd377e270647f81070020e1f0bab9b

7 years ago  ANDROID: sdcardfs: Remove uninformative prints
Daniel Rosenberg [Sat, 11 Mar 2017 02:58:25 +0000 (18:58 -0800)]
ANDROID: sdcardfs: Remove uninformative prints

At best these prints do not provide useful information, and
at worst, some allow userspace to abuse the kernel log.

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 36138424
Change-Id: I812c57cc6a22b37262935ab77f48f3af4c36827e

7 years ago  ANDROID: sdcardfs: move path_put outside of spinlock
Daniel Rosenberg [Fri, 10 Mar 2017 21:54:30 +0000 (13:54 -0800)]
ANDROID: sdcardfs: move path_put outside of spinlock

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 35643557
Change-Id: Ib279ebd7dd4e5884d184d67696a93e34993bc1ef

7 years ago  ANDROID: sdcardfs: Use case insensitive hash function
Daniel Rosenberg [Fri, 10 Mar 2017 20:39:42 +0000 (12:39 -0800)]
ANDROID: sdcardfs: Use case insensitive hash function

Case insensitive comparisons don't help us much if
we hash to different buckets...
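
A minimal sketch of a matching case-insensitive d_hash, with helper names
chosen only for illustration (this is not the actual sdcardfs code):

#include <linux/ctype.h>
#include <linux/dcache.h>

static int sdcardfs_hash_ci_sketch(const struct dentry *dentry, struct qstr *q)
{
	const unsigned char *name = q->name;
	unsigned int len = q->len;
	unsigned long hash = init_name_hash();

	/* hash the lowercased bytes so "Foo" and "foo" land in the same bucket */
	while (len--)
		hash = partial_name_hash(tolower(*name++), hash);
	q->hash = end_name_hash(hash);
	return 0;
}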

Signed-off-by: Daniel Rosenberg <drosen@google.com>
bug: 36004503
Change-Id: I91e00dbcd860a709cbd4f7fd7fc6d855779f3285

7 years ago  ANDROID: sdcardfs: declare MODULE_ALIAS_FS
Daniel Rosenberg [Fri, 10 Mar 2017 04:59:18 +0000 (20:59 -0800)]
ANDROID: sdcardfs: declare MODULE_ALIAS_FS

From commit ee616b78aa87 ("Wrapfs: declare MODULE_ALIAS_FS")

Signed-off-by: Daniel Rosenberg <drosen@google.com>
bug: 35766959
Change-Id: Ia4728ab49d065b1d2eb27825046f14b97c328cba

7 years ago  ANDROID: sdcardfs: Get the blocksize from the lower fs
Daniel Rosenberg [Fri, 10 Mar 2017 02:12:16 +0000 (18:12 -0800)]
ANDROID: sdcardfs: Get the blocksize from the lower fs

This changes sdcardfs to be more in line with the
getattr in wrapfs, which calls the lower fs's getattr
to get the block size

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 34723223
Change-Id: I1c9e16604ba580a8cdefa17f02dcc489d7351aed

7 years ago  ANDROID: sdcardfs: Use d_invalidate instead of drop_recursive
Daniel Rosenberg [Thu, 9 Mar 2017 01:45:46 +0000 (17:45 -0800)]
ANDROID: sdcardfs: Use d_invalidate instead of drop_recursive

drop_recursive did not properly remove stale dentries.
Instead, we use the vfs's d_invalidate, which does the proper cleanup.

Additionally, remove drop_recursive and fixup_top_recursive, which are
no longer used.

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Change-Id: Ibff61b0c34b725b024a050169047a415bc90f0d8

7 years ago  ANDROID: sdcardfs: Switch to internal case insensitive compare
Daniel Rosenberg [Thu, 9 Mar 2017 01:20:02 +0000 (17:20 -0800)]
ANDROID: sdcardfs: Switch to internal case insensitive compare

There were still a few places where we called into a case
insensitive lookup that was not defined by sdcardfs.
Moving them all to the same place will allow us to switch
the implementation in the future.

Additionally, the check in fixup_perms_recursive did not
take into account the length of both strings, causing
extraneous matches when the name we were looking for was
a prefix of the child name.

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Change-Id: I45ce768cd782cb4ea1ae183772781387c590ecc2

7 years ago  ANDROID: sdcardfs: Use spin_lock_nested
Daniel Rosenberg [Thu, 9 Mar 2017 01:11:51 +0000 (17:11 -0800)]
ANDROID: sdcardfs: Use spin_lock_nested

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 36007653
Change-Id: I805d5afec797669679853fb2bb993ee38e6276e4

7 years ago  ANDROID: sdcardfs: Replace get/put with d_lock
Daniel Rosenberg [Thu, 2 Mar 2017 23:11:27 +0000 (15:11 -0800)]
ANDROID: sdcardfs: Replace get/put with d_lock

dput cannot be called while holding a spin_lock. Instead,
we protect our accesses by holding the d_lock.

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 35643557
Change-Id: I22cf30856d75b5616cbb0c223724f5ab866b5114

7 years ago  ANDROID: sdcardfs: rate limit warning print
Daniel Rosenberg [Fri, 3 Mar 2017 02:07:21 +0000 (18:07 -0800)]
ANDROID: sdcardfs: rate limit warning print

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 35848445
Change-Id: Ida72ea0ece191b2ae4a8babae096b2451eb563f6

7 years ago  ANDROID: sdcardfs: Fix case insensitive lookup
Daniel Rosenberg [Thu, 2 Mar 2017 01:04:41 +0000 (17:04 -0800)]
ANDROID: sdcardfs: Fix case insensitive lookup

The previous case insensitive lookup relied on the
entry being present in the dcache. This instead uses
iterate_dir to find the correct case.

Signed-off-by: Daniel Rosenberg <drosen@google.com>
bug: 35633782
Change-Id: I556f7090773468c1943c89a5e2aa07f746ba49c5

7 years ago  ANDROID: uid_sys_stats: account for fsync syscalls
Jin Qian [Thu, 2 Mar 2017 21:39:43 +0000 (13:39 -0800)]
ANDROID: uid_sys_stats: account for fsync syscalls

Change-Id: Ie888d8a0f4ec7a27dea86dc4afba8e6fd4203488
Signed-off-by: Jin Qian <jinqian@google.com>
7 years ago  ANDROID: sched: add a counter to track fsync
Jin Qian [Thu, 2 Mar 2017 21:32:59 +0000 (13:32 -0800)]
ANDROID: sched: add a counter to track fsync

Change-Id: I6c138de5b2332eea70f57e098134d1d141247b3f
Signed-off-by: Jin Qian <jinqian@google.com>
7 years ago  ANDROID: uid_sys_stats: fix negative write bytes.
Jin Qian [Tue, 28 Feb 2017 23:09:42 +0000 (15:09 -0800)]
ANDROID: uid_sys_stats: fix negative write bytes.

A task can cancel writes made by other tasks. In rare cases,
cancelled_write_bytes is larger than write_bytes if the task
itself didn't make any write. This doesn't affect total size
but may cause confusion when looking at IO usage on individual
tasks.

Bug: 35851986
Change-Id: If6cb549aeef9e248e18d804293401bb2b91918ca
Signed-off-by: Jin Qian <jinqian@google.com>
7 years ago  ANDROID: uid_sys_stats: allow writing same state
Jin Qian [Wed, 18 Jan 2017 01:26:07 +0000 (17:26 -0800)]
ANDROID: uid_sys_stats: allow writing same state

Signed-off-by: Jin Qian <jinqian@google.com>
Bug: 34360629
Change-Id: Ia748351e07910b1febe54f0484ca1be58c4eb9c7

7 years ago  ANDROID: uid_sys_stats: rename uid_cputime.c to uid_sys_stats.c
Jin Qian [Wed, 11 Jan 2017 00:11:07 +0000 (16:11 -0800)]
ANDROID: uid_sys_stats: rename uid_cputime.c to uid_sys_stats.c

This module tracks cputime and io stats.

Signed-off-by: Jin Qian <jinqian@google.com>
Bug: 34198239
Change-Id: I9ee7d9e915431e0bb714b36b5a2282e1fdcc7342

7 years ago  ANDROID: uid_cputime: add per-uid IO usage accounting
Jin Qian [Wed, 11 Jan 2017 00:10:35 +0000 (16:10 -0800)]
ANDROID: uid_cputime: add per-uid IO usage accounting

IO usage is accounted in foreground and background buckets.
For each uid, IO usage is calculated in two steps.

delta = current total of all uid tasks - previous total
current bucket += delta

The bucket is determined by the current uid stat. Userspace writes
<uid> <stat> to /proc/uid_procstat/set when the uid stat is updated.

/proc/uid_io/stats shows IO usage in this format.
<uid> <foreground IO> <background IO>
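
A rough sketch of the bucketed delta accounting described above (all struct,
field and constant names here are assumptions for illustration only):

struct uid_io_sketch {
	u64 last_total;		/* previous total for this uid          */
	u64 bucket[2];		/* 0 = foreground, 1 = background       */
	int state;		/* updated via /proc/uid_procstat/set   */
};

static void account_uid_io_sketch(struct uid_io_sketch *u, u64 current_total)
{
	u64 delta = current_total - u->last_total;	/* step 1 */

	u->last_total = current_total;
	u->bucket[u->state] += delta;			/* step 2 */
}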

Signed-off-by: Jin Qian <jinqian@google.com>
Bug: 34198239
Change-Id: Ib8bebda53e7a56f45ea3eb0ec9a3153d44188102

7 years ago  DTB: Add EAS compatible Juno Energy model to 'juno.dts'
Chris Redpath [Fri, 13 Nov 2015 10:21:39 +0000 (10:21 +0000)]
DTB: Add EAS compatible Juno Energy model to 'juno.dts'

EAS expects the energy model for the CPUs and cluster states to be
available in the DTB. The energy model data comes from previous versions.

Change-Id: I87535c8d802797361333929d809b43383bc8954b
(cherry picked from commit bf137f205f312a1814ae38f908ec7bdbdddeaa3e (LSK 4.4))
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
7 years ago  arm64: dts: juno: Add idle-states to device tree
Jon Medhurst (Tixy) [Wed, 9 Dec 2015 09:40:53 +0000 (09:40 +0000)]
arm64: dts: juno: Add idle-states to device tree

This patch adds idle-states bindings data collected through a set of
benchmarking experiments (latency and energy consumption) on Juno
boards. Latency data represents the worst-case scenarios as required
by the DT idle-states bindings.

Change-Id: I7b2d81fa66f8ce8b229457cfefff06e9edd545c7
(cherry picked from commit 286896f43b0248960f69660159b507b23751b38a)
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
7 years ago  ANDROID: Replace spaces by '_' for some android filesystem tracepoints.
Mohan Srinivasan [Sat, 11 Mar 2017 00:08:30 +0000 (16:08 -0800)]
ANDROID: Replace spaces by '_' for some android filesystem tracepoints.

Android files frequently have spaces in them, as do cmdline strings.
Replace these spaces with '_', so tools that parse these tracepoints
don't get terribly confused.

Change-Id: I1cbbedf5c803aa6a58d9b8b7836e9125683c49d1
Signed-off-by: Mohan Srinivasan <srmohan@google.com>
(cherry picked from commit 5035d5f0933758dd515327d038e5bef7e40dbaa7)

7 years ago  usb: gadget: f_accessory: Fix for UsbAccessory clean unbind.
Anson Jacob [Sat, 13 Aug 2016 00:38:10 +0000 (20:38 -0400)]
usb: gadget: f_accessory: Fix for UsbAccessory clean unbind.

Reapplying fix by Darren Whobrey (Change 69674)

Fixes issues: 20545, 59667 and 61390.
With the prior version of f_accessory.c, UsbAccessories would not
unbind cleanly when the application was closed or I/O stopped
while the USB cable was still connected. The accessory gadget
driver would be left in an invalid state which was not reset
on subsequent binding or opening. A reboot was necessary to clear it.

On some phones this issue causes the phone to reboot upon
unplugging the USB cable.

The main problem was that acc_disconnect was being called on I/O error,
which reset the disconnected and online flags.

A minor fix was required to properly track the setting and unsetting of
the disconnected and online flags. Also added urb queue wakeups on unbind
to help unblock waiting threads.

Tested on Nexus 7 grouper. Expected behaviour now observed:
closing accessory causes blocked i/o to interrupt with IOException.
Accessory can be restarted following closing of file handle
and re-opening.

This is a generic fix that applies to all devices.

Change-Id: I4e08b326730dd3a2820c863124cee10f7cb5501e
Signed-off-by: Darren Whobrey <d.whobrey@mildai.org>
Signed-off-by: Anson Jacob <ansonjacob.aj@gmail.com>
7 years ago  android: binder: move global binder state into context struct.
Martijn Coenen [Fri, 30 Sep 2016 14:40:04 +0000 (16:40 +0200)]
android: binder: move global binder state into context struct.

This change moves all global binder state into
the context struct, thereby completely separating
the state and the locks between two different contexts.

The debugfs entries remain global, printing entries
from all contexts.

Change-Id: If8e3e2bece7bc6f974b66fbcf1d91d529ffa62f0
Signed-off-by: Martijn Coenen <maco@google.com>
7 years ago  android: binder: add padding to binder_fd_array_object.
Martijn Coenen [Tue, 7 Mar 2017 14:54:56 +0000 (15:54 +0100)]
android: binder: add padding to binder_fd_array_object.

binder_fd_array_object starts with a 4-byte header,
followed by a few fields that are 8 bytes when
ANDROID_BINDER_IPC_32BIT=N.

This can cause alignment issues in a 64-bit kernel
with a 32-bit userspace, as on x86_32 an 8-byte primitive
may be aligned to a 4-byte address. Pad with a __u32
to fix this.
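
For reference, the padded layout described above looks roughly like this
(a sketch based on the commit description, not necessarily the exact header):

struct binder_fd_array_object {
	struct binder_object_header	hdr;	/* 4-byte header             */
	__u32				pad;	/* added: keeps the 8-byte   */
						/* fields naturally aligned  */
	binder_size_t			num_fds;
	binder_size_t			parent;
	binder_size_t			parent_offset;
};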

Change-Id: I4374ed2cc3ccd3c6a1474cb7209b53ebfd91077b
Signed-off-by: Martijn Coenen <maco@android.com>
7 years ago  binder: use group leader instead of open thread
Martijn Coenen [Tue, 7 Mar 2017 14:51:18 +0000 (15:51 +0100)]
binder: use group leader instead of open thread

The binder allocator assumes that the thread that
called binder_open will never die for the lifetime of
that proc. That thread is normally the group_leader,
however it may not be. Use the group_leader instead
of current.

Bug: 35707103
Test: Created test case to open with temporary thread

Change-Id: Id693f74b3591f3524a8c6e9508e70f3e5a80c588
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Martijn Coenen <maco@android.com>
7 years ago  nf: IDLETIMER: Use fullsock when querying uid
Subash Abhinov Kasiviswanathan [Wed, 2 Nov 2016 17:56:40 +0000 (11:56 -0600)]
nf: IDLETIMER: Use fullsock when querying uid

sock_i_uid() acquires the sk_callback_lock which does not exist for
sockets in TCP_NEW_SYN_RECV state. This results in errors showing up
as spinlock bad magic. Fix this by looking for the full sock as
suggested by Eric.
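
A sketch of the suggested approach, assuming it runs in the existing
reset_timer() context (skb and uid are taken from that assumed context):

	struct sock *sk = skb_to_full_sk(skb);

	/* request sockets (TCP_NEW_SYN_RECV) have no sk_callback_lock;
	 * only read the uid from a full socket */
	if (sk && sk_fullsock(sk))
		uid = from_kuid_munged(&init_user_ns, sock_i_uid(sk));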

Callstack for reference -

-003|rwlock_bug
-004|arch_read_lock
-004|do_raw_read_lock
-005|raw_read_lock_bh
-006|sock_i_uid
-007|from_kuid_munged(inline)
-007|reset_timer
-008|idletimer_tg_target
-009|ipt_do_table
-010|iptable_mangle_hook
-011|nf_iterate
-012|nf_hook_slow
-013|NF_HOOK_COND(inline)
-013|ip_output
-014|ip_local_out
-015|ip_build_and_send_pkt
-016|tcp_v4_send_synack
-017|atomic_sub_return(inline)
-017|reqsk_put(inline)
-017|tcp_conn_request
-018|tcp_v4_conn_request
-019|tcp_rcv_state_process
-020|tcp_v4_do_rcv
-021|tcp_v4_rcv
-022|ip_local_deliver_finish
-023|NF_HOOK_THRESH(inline)
-023|NF_HOOK(inline)
-023|ip_local_deliver
-024|ip_rcv_finish
-025|NF_HOOK_THRESH(inline)
-025|NF_HOOK(inline)
-025|ip_rcv
-026|deliver_skb(inline)
-026|deliver_ptype_list_skb(inline)
-026|__netif_receive_skb_core
-027|__netif_receive_skb
-028|netif_receive_skb_internal
-029|netif_receive_skb

Change-Id: Ic8f3a3d2d7af31434d1163b03971994e2125d552
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Cc: Eric Dumazet <edumazet@google.com>
7 years ago  nf: IDLETIMER: Fix use after free condition during work
Subash Abhinov Kasiviswanathan [Fri, 11 Nov 2016 02:36:15 +0000 (19:36 -0700)]
nf: IDLETIMER: Fix use after free condition during work

schedule_work(&timer->work) appears to be called after
cancel_work_sync(&info->timer->work) is completed.
Work can be scheduled from the PM_POST_SUSPEND notification event
even after cancel_work_sync is called.

Call stack

-004|notify_netlink_uevent(
    |    [X19] timer = 0xFFFFFFC0A5DFC780 -> (
    |      ...
    |      [NSD:0xFFFFFFC0A5DFC800] kobj = 0x6B6B6B6B6B6B6B6B,
    |      [NSD:0xFFFFFFC0A5DFC868] timeout = 0x6B6B6B6B,
    |      [NSD:0xFFFFFFC0A5DFC86C] refcnt = 0x6B6B6B6B,
    |      [NSD:0xFFFFFFC0A5DFC870] work_pending = 0x6B,
    |      [NSD:0xFFFFFFC0A5DFC871] send_nl_msg = 0x6B,
    |      [NSD:0xFFFFFFC0A5DFC872] active = 0x6B,
    |      [NSD:0xFFFFFFC0A5DFC874] uid = 0x6B6B6B6B,
    |      [NSD:0xFFFFFFC0A5DFC878] suspend_time_valid = 0x6B))
-005|idletimer_tg_work(
-006|__read_once_size(inline)
-006|static_key_count(inline)
-006|static_key_false(inline)
-006|trace_workqueue_execute_end(inline)
-006|process_one_work(
-007|worker_thread(
-008|kthread(
-009|ret_from_fork(asm)
---|end of frame

Force any pending idletimer_tg_work() to complete before freeing
the associated work struct and after unregistering from the pm_notifier
callback.
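
In other words, the destroy path needs roughly this ordering (a sketch with
assumed field names, not the exact driver code):

	unregister_pm_notifier(&timer->pm_nb);	/* no new work can be scheduled */
	del_timer_sync(&timer->timer);
	cancel_work_sync(&timer->work);		/* wait for in-flight work      */
	kfree(timer);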

Change-Id: I4c5f0a1c142f7d698c092cf7bcafdb0f9fbaa9c1
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
7 years ago  ANDROID: dm: android-verity: fix table_make_digest() error handling
Greg Hackmann [Mon, 14 Nov 2016 17:48:02 +0000 (09:48 -0800)]
ANDROID: dm: android-verity: fix table_make_digest() error handling

If table_make_digest() fails, verify_verity_signature() would try to
pass the returned ERR_PTR() to kfree().

This fixes the smatch error:

drivers/md/dm-android-verity.c:601 verify_verity_signature() error: 'pks' dereferencing possible ERR_PTR()

Change-Id: I9b9b7764b538cb4a5f94337660e9b0f149b139be
Signed-off-by: Greg Hackmann <ghackmann@google.com>
7 years ago  ANDROID: usb: gadget: function: Fix commenting style
Anson Jacob [Thu, 17 Nov 2016 07:32:40 +0000 (02:32 -0500)]
ANDROID: usb: gadget: function: Fix commenting style

Fix checkpatch.pl warning:
Block comments use * on subsequent lines

Change-Id: I9c92f128fdb3aeeb6ab9c7039e11f857bebb9539
Signed-off-by: Anson Jacob <ansonjacob.aj@gmail.com>
7 years ago  cpufreq: interactive governor drops bits in time calculation
Chris Redpath [Mon, 17 Jun 2013 17:36:56 +0000 (18:36 +0100)]
cpufreq: interactive governor drops bits in time calculation

Keep time calculation in 64-bit throughout. If we have long times
between idle calculations this can result in deltas > 32 bits
which causes incorrect load percentage calculations and selecting
the wrong frequencies if we truncate here.
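
Conceptually, the fix keeps the delta arithmetic in u64 instead of truncating
to 32 bits (a sketch with assumed variable names, not the governor's exact code):

	u64 delta_idle = now_idle - pcpu->time_in_idle;
	u64 delta_time = now - pcpu->time_in_idle_timestamp;	/* may exceed 32 bits */
	unsigned int load = 0;

	if (delta_time > delta_idle)
		load = div64_u64(100 * (delta_time - delta_idle), delta_time);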

Signed-off-by: Chris Redpath <chris.redpath@arm.com>
7 years ago  ANDROID: sdcardfs: support direct-IO (DIO) operations
Daniel Rosenberg [Fri, 24 Feb 2017 23:49:45 +0000 (15:49 -0800)]
ANDROID: sdcardfs: support direct-IO (DIO) operations

This comes from the wrapfs patch
2e346c83b26e Wrapfs: support direct-IO (DIO) operations

Signed-off-by: Li Mengyang <li.mengyang@stonybrook.edu>
Signed-off-by: Erez Zadok <ezk@cs.sunysb.edu>
Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 34133558
Change-Id: I3fd779c510ab70d56b1d918f99c20421b524cdc4

7 years ago  ANDROID: sdcardfs: implement vm_ops->page_mkwrite
Daniel Rosenberg [Fri, 24 Feb 2017 23:41:48 +0000 (15:41 -0800)]
ANDROID: sdcardfs: implement vm_ops->page_mkwrite

This comes from the wrapfs patch
3dfec0ffe5e2 Wrapfs: implement vm_ops->page_mkwrite

Some file systems (e.g., ext4) require it.  Reported by Ted Ts'o.

Signed-off-by: Erez Zadok <ezk@cs.sunysb.edu>
Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 34133558
Change-Id: I1a389b2422c654a6d3046bb8ec3e20511aebfa8e

7 years ago  ANDROID: sdcardfs: Don't bother deleting freelist
Daniel Rosenberg [Wed, 22 Feb 2017 22:41:58 +0000 (14:41 -0800)]
ANDROID: sdcardfs: Don't bother deleting freelist

There is no point deleting entries from dlist, as
that is a temporary list on the stack which
contains only entries that are being deleted.

Not all code paths set up dlist, so those that
don't were performing invalid accesses in
hash_del_rcu. As an additional means to prevent
any other issue, we null out the list entries when
we allocate from the cache.

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 35666680
Change-Id: Ibb1e28c08c3a600c29418d39ba1c0f3db3bf31e5

7 years ago  ANDROID: sdcardfs: Add missing path_put
Daniel Rosenberg [Fri, 17 Feb 2017 01:55:22 +0000 (17:55 -0800)]
ANDROID: sdcardfs: Add missing path_put

"ANDROID: sdcardfs: Add GID Derivation to sdcardfs" introduced
an unbalanced path_get, leading to storage space not being freed
after deleting a file until rebooting. This adds the missing path_put.

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 34691169
Change-Id: Ia7ef97ec2eca2c555cc06b235715635afc87940e

7 years ago  ANDROID: sdcardfs: Fix incorrect hash
Daniel Rosenberg [Wed, 15 Feb 2017 04:47:17 +0000 (20:47 -0800)]
ANDROID: sdcardfs: Fix incorrect hash

This adds back the hash calculation removed as part of
the previous patch, as it is in fact necessary.

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 35307857
Change-Id: Ie607332bcf2c5d2efdf924e4060ef3f576bf25dc

7 years ago  ANDROID: ext4: add a non-reversible key derivation method
Eric Biggers [Wed, 11 Jan 2017 01:02:40 +0000 (17:02 -0800)]
ANDROID: ext4: add a non-reversible key derivation method

Add a new per-file key derivation method to ext4 encryption defined as:

    derived_key[0:127]   = AES-256-ENCRYPT(master_key[0:255], nonce)
    derived_key[128:255] = AES-256-ENCRYPT(master_key[0:255], nonce ^ 0x01)
    derived_key[256:383] = AES-256-ENCRYPT(master_key[256:511], nonce)
    derived_key[384:511] = AES-256-ENCRYPT(master_key[256:511], nonce ^ 0x01)

... where the derived key and master key are both 512 bits, the nonce is
128 bits, AES-256-ENCRYPT takes the arguments (key, plaintext), and
'nonce ^ 0x01' denotes flipping the low order bit of the last byte.

The existing key derivation method is
'derived_key = AES-128-ECB-ENCRYPT(key=nonce, plaintext=master_key)'.
We want to make this change because currently, given a derived key you
can easily compute the master key by computing
'AES-128-ECB-DECRYPT(key=nonce, ciphertext=derived_key)'.

This was formerly OK because the previous threat model assumed that the
master key and derived keys are equally hard to obtain by an attacker.
However, we are looking to move the master key into secure hardware in
some cases, so we want to make sure that an attacker with access to a
derived key cannot compute the master key.

We are doing this instead of increasing the nonce to 512 bits because
it's important that the per-file xattr fit in the inode itself.  By
default, inodes are 256 bytes, and on Android we're already pretty close
to that limit. If we increase the nonce size, we end up allocating a new
filesystem block for each and every encrypted file, which has a
substantial performance and disk utilization impact.

Another option considered was to use the HMAC-SHA512 of the nonce, keyed
by the master key.  However this would be a little less performant,
would be less extensible to other key sizes and MAC algorithms, and
would pull in a dependency (security-wise and code-wise) on SHA-512.

Due to the use of "aes" rather than "ecb(aes)" in the implementation,
the new key derivation method is actually about twice as fast as the old
one, though the old one could be optimized similarly as well.

This patch makes the new key derivation method be used whenever HEH is
used to encrypt filenames.  Although these two features are logically
independent, it was decided to bundle them together for now.  Note that
neither feature is upstream yet, and it cannot be guaranteed that the
on-disk format won't change if/when these features are upstreamed.  For
this reason, and as noted in the previous patch, the features are both
behind a special mode number for now.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Change-Id: Iee4113f57e59dc8c0b7dc5238d7003c83defb986

7 years ago  ANDROID: ext4: allow encrypting filenames using HEH algorithm
Eric Biggers [Wed, 11 Jan 2017 01:02:39 +0000 (17:02 -0800)]
ANDROID: ext4: allow encrypting filenames using HEH algorithm

Update ext4 encryption to allow filenames to be encrypted using the
Hash-Encrypt-Hash (HEH) block cipher mode of operation, which is
believed to be more secure than CBC, particularly within the constant
initialization vector (IV) constraint of filename encryption.  Notably,
HEH avoids the "common prefix" problem of CBC.  Both algorithms use
AES-256 as the underlying block cipher and take a 256-bit key.

We assign mode number 126 to HEH, just below 127
(EXT4_ENCRYPTION_MODE_PRIVATE) which in some kernels is reserved for
inline encryption on MSM chipsets.  Note that these modes are not yet
upstream, which is why these numbers are being used; it's preferable to
avoid collisions with modes that may be added upstream.  Also, although
HEH is not hardware-specific, we aren't currently reserving mode number
5 for HEH upstream, since for now we are tying HEH to the new key
derivation method which might become an independent flag upstream, and
there's also a chance that details of HEH will change after it gets
wider review.

Bug: 32975945
Signed-off-by: Eric Biggers <ebiggers@google.com>
Change-Id: I81418709d47da0e0ac607ae3f91088063c2d5dd4

7 years ago  ANDROID: arm64/crypto: add ARMv8-CE optimized poly_hash algorithm
Eric Biggers [Wed, 11 Jan 2017 02:32:19 +0000 (18:32 -0800)]
ANDROID: arm64/crypto: add ARMv8-CE optimized poly_hash algorithm

poly_hash is part of the HEH (Hash-Encrypt-Hash) encryption mode,
proposed in Internet Draft
https://tools.ietf.org/html/draft-cope-heh-01.  poly_hash is very
similar to GHASH; besides the swapping of the last two coefficients
which we opted to handle in the HEH template, poly_hash just uses a
different finite field representation.  As with GHASH, poly_hash becomes
much faster and more secure against timing attacks when implemented
using carryless multiplication instructions instead of tables.  This
patch adds an ARMv8-CE optimized version of poly_hash, based roughly on
the existing ARMv8-CE optimized version of GHASH.

Benchmark results are shown below, but note that the resistance to
timing attacks may be even more important than the performance gain.

poly_hash only:

    poly_hash-generic:
        1,000,000 setkey() takes 1185 ms
        hashing is 328 MB/s

    poly_hash-ce:
        1,000,000 setkey() takes 8 ms
        hashing is 1756 MB/s

heh(aes) with 4096-byte inputs (this is the ideal case, as the
improvement is less significant with smaller inputs):

    encryption with "heh_base(cmac(aes-ce),poly_hash-generic,ecb-aes-ce)": 118 MB/s
    decryption with "heh_base(cmac(aes-ce),poly_hash-generic,ecb-aes-ce)": 120 MB/s

    encryption with "heh_base(cmac(aes-ce),poly_hash-ce,ecb-aes-ce)": 291 MB/s
    decryption with "heh_base(cmac(aes-ce),poly_hash-ce,ecb-aes-ce)": 293 MB/s

Bug: 32508661
Signed-off-by: Eric Biggers <ebiggers@google.com>
Change-Id: I621ec0e1115df7e6f5cbd7e864a4a9d8d2e94cf2

7 years ago  ANDROID: crypto: heh - factor out poly_hash algorithm
Eric Biggers [Wed, 11 Jan 2017 18:36:41 +0000 (10:36 -0800)]
ANDROID: crypto: heh - factor out poly_hash algorithm

Factor most of poly_hash() out into its own keyed hash algorithm so that
optimized architecture-specific implementations of it will be possible.

For now we call poly_hash through the shash API, since HEH already had
an example of using shash for another algorithm (CMAC), and we will not
be adding any poly_hash implementations that require ahash yet.  We can
however switch to ahash later if it becomes useful.

Bug: 32508661
Signed-off-by: Eric Biggers <ebiggers@google.com>
Change-Id: I8de54ddcecd1d7fa6e9842a09506a08129bae0b6

7 years ago  ANDROID: crypto: heh - Add Hash-Encrypt-Hash (HEH) algorithm
Alex Cope [Wed, 11 Jan 2017 00:47:49 +0000 (16:47 -0800)]
ANDROID: crypto: heh - Add Hash-Encrypt-Hash (HEH) algorithm

Hash-Encrypt-Hash (HEH) is a proposed block cipher mode of operation
which extends the strong pseudo-random permutation property of block
ciphers (e.g. AES) to arbitrary length input strings.  This provides a
stronger notion of security than existing block cipher modes of
operation (e.g. CBC, CTR, XTS), though it is usually less performant.
It uses two keyed invertible hash functions with a layer of ECB
encryption applied in-between.  The algorithm is currently specified by
the following Internet Draft:

https://tools.ietf.org/html/draft-cope-heh-01

This patch adds HEH as a symmetric cipher only.  Support for HEH as an
AEAD is not yet implemented.

HEH will use an existing accelerated ecb(block_cipher) implementation
for the encrypt step if available.  Accelerated versions of the hash
step are planned but will be left for later patches.

This patch backports HEH to the 4.4 Android kernel, initially for use by
ext4 filenames encryption.  Note that HEH is not yet upstream; however,
patches have been made available on linux-crypto, and as noted there is
also a draft specification available.  This backport required updating
the code to conform to the legacy ablkcipher API rather than the
skcipher API, which wasn't complete in 4.4.

Signed-off-by: Alex Cope <alexcope@google.com>
Bug: 32975945
Signed-off-by: Eric Biggers <ebiggers@google.com>
Change-Id: I945bcc9c0115916824d701bae91b86e3f059a1a9

7 years ago  ANDROID: crypto: gf128mul - Add ble multiplication functions
Alex Cope [Wed, 11 Jan 2017 00:47:49 +0000 (16:47 -0800)]
ANDROID: crypto: gf128mul - Add ble multiplication functions

Adding ble multiplication to GF128mul, and fixing up comments.

The ble multiplication functions multiply GF(2^128) elements in the
ble format. This format is preferable because the bits within each
byte map to polynomial coefficients in the natural order (lowest order
bit = coefficient of lowest degree polynomial term), and the bytes are
stored in little endian order which matches the endianness of most
modern CPUs.

These new functions will be used by the HEH algorithm.

Signed-off-by: Alex Cope <alexcope@google.com>
Bug: 32975945
Signed-off-by: Eric Biggers <ebiggers@google.com>
Change-Id: I39a58e8ee83e6f9b2e6bd51738f816dbfa2f3a47

7 years ago  ANDROID: crypto: gf128mul - Refactor gf128 overflow macros and tables
Eric Biggers [Wed, 11 Jan 2017 01:04:54 +0000 (17:04 -0800)]
ANDROID: crypto: gf128mul - Refactor gf128 overflow macros and tables

Rename and clean up the GF(2^128) overflow macros and tables.  Their
usage is more general than the name suggested, e.g. what was previously
known as the "bbe" table can actually be used for both "bbe" and "ble"
multiplication.

Bug: 32975945
Signed-off-by: Eric Biggers <ebiggers@google.com>
Change-Id: Ie6c47b4075ca40031eb1767e9b468cfd7bf1b2e4

7 years ago  UPSTREAM: crypto: gf128mul - Zero memory when freeing multiplication table
Alex Cope [Mon, 14 Nov 2016 19:02:54 +0000 (11:02 -0800)]
UPSTREAM: crypto: gf128mul - Zero memory when freeing multiplication table

GF(2^128) multiplication tables are typically used for secret
information, so it's a good idea to zero them on free.

Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry-picked from 75aa0a7cafe951538c7cb7c5ed457a3371ec5bcd)
Bug: 32975945
Signed-off-by: Eric Biggers <ebiggers@google.com>
Change-Id: I37b1ae9544158007f9ee2caf070120f4a42153ab

7 years ago  ANDROID: crypto: shash - Add crypto_grab_shash() and crypto_spawn_shash_alg()
Eric Biggers [Wed, 11 Jan 2017 00:47:49 +0000 (16:47 -0800)]
ANDROID: crypto: shash - Add crypto_grab_shash() and crypto_spawn_shash_alg()

Analogous to crypto_grab_skcipher() and crypto_spawn_skcipher_alg(),
these are useful for algorithms that need to use a shash sub-algorithm,
possibly in addition to other sub-algorithms.

Bug: 32975945
Signed-off-by: Eric Biggers <ebiggers@google.com>
Change-Id: I44e5a519d73f5f839e3b6ecbf8c66e36ec569557

7 years ago  ANDROID: crypto: allow blkcipher walks over ablkcipher data
Eric Biggers [Wed, 11 Jan 2017 00:47:49 +0000 (16:47 -0800)]
ANDROID: crypto: allow blkcipher walks over ablkcipher data

Add a function blkcipher_ablkcipher_walk_virt() which allows ablkcipher
algorithms to use the blkcipher_walk API to walk over their data.  This
will be used by the HEH algorithm, which to support asynchronous ECB
algorithms will be an ablkcipher, but it also needs to make other passes
over the data.

Bug: 32975945
Signed-off-by: Eric Biggers <ebiggers@google.com>
Change-Id: I05f9a0e5473ba6115fcc72d5122d6b0b18b2078b

7 years ago  UPSTREAM: arm/arm64: crypto: assure that ECB modes don't require an IV
Jeremy Linton [Fri, 12 Feb 2016 15:47:52 +0000 (09:47 -0600)]
UPSTREAM: arm/arm64: crypto: assure that ECB modes don't require an IV

ECB modes don't use an initialization vector. The kernel
/proc/crypto interface doesn't reflect this properly.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from bee038a4bd2efe8188cc80dfdad706a9abe568ad)
Signed-off-by: Eric Biggers <ebiggers@google.com>
Change-Id: Ief9558d2b41be58a2d845d2033a141b5ef7b585f

7 years ago  ANDROID: Refactor fs readpage/write tracepoints.
Mohan Srinivasan [Fri, 3 Feb 2017 23:48:03 +0000 (15:48 -0800)]
ANDROID: Refactor fs readpage/write tracepoints.

Refactor the fs readpage/write tracepoints to move the
inode->path lookup outside the tracepoint code, and pass a pointer
to the path into the tracepoint code instead. This is necessary
because the tracepoint code runs with preemption disabled. Thanks to
Trilok Soni for catching this in 4.4.

Change-Id: I7486c5947918d155a30c61d6b9cd5027cf8fbe15
Signed-off-by: Mohan Srinivasan <srmohan@google.com>
7 years ago  Squashfs: optimize reading uncompressed data
Adrien Schildknecht [Thu, 29 Sep 2016 22:25:30 +0000 (15:25 -0700)]
Squashfs: optimize reading uncompressed data

When dealing with uncompressed data, there is no need to read a whole
block (default 128K) to get the desired page: the pages are
independent of each other.

This patch changes the readpages logic so that reading uncompressed
data only reads the number of pages advised by the readahead algorithm.

Moreover, if the page actor contains holes (i.e. pages that are already
up-to-date), squashfs skips the buffer_head associated with those pages.

This patch greatly improves the performance of random reads for
uncompressed files because squashfs only reads what is needed. It also
reduces the number of unnecessary reads.

Signed-off-by: Adrien Schildknecht <adriens@google.com>
Change-Id: I1850150fbf4b45c9dd138d88409fea1ab44054c0

7 years ago  Squashfs: implement .readpages()
Adrien Schildknecht [Mon, 7 Nov 2016 20:41:42 +0000 (12:41 -0800)]
Squashfs: implement .readpages()

Squashfs does not implement .readpages(), so the kernel just repeatedly
calls .readpage().

The readpages function tries to pack as many pages as possible in the
same page actor so that only 1 read request is issued.

Now that the read requests are asynchronous, the kernel can truly
prefetch pages using its readahead algorithm.

Signed-off-by: Adrien Schildknecht <adriens@google.com>
Change-Id: Ice70e029dc24526f61e4e5a1a902588be2212498

7 years ago  Squashfs: replace buffer_head with BIO
Adrien Schildknecht [Mon, 7 Nov 2016 20:37:55 +0000 (12:37 -0800)]
Squashfs: replace buffer_head with BIO

The 'll_rw_block' has been deprecated and BIO is now the basic container
for block I/O within the kernel.

Switching to BIO offers 2 advantages:
  1/ It removes the synchronous wait for up-to-date buffers: SquashFS
     now deals with decompressions/copies asynchronously.
     Implementing an asynchronous mechanism to read data is needed to
     efficiently implement .readpages().
  2/ Prior to this patch, merging the read requests depended entirely on
     the IO scheduler. SquashFS has more information than the IO
     scheduler about what could be merged. Moreover, merging the reads
     at the FS level means that we rely less on the IO scheduler.

Signed-off-by: Adrien Schildknecht <adriens@google.com>
Change-Id: I775d2e11f017476e1899518ab52d9d0a8a0bce28

7 years ago  Squashfs: refactor page_actor
Adrien Schildknecht [Wed, 28 Sep 2016 20:59:18 +0000 (13:59 -0700)]
Squashfs: refactor page_actor

This patch essentially does 3 things:
  1/ Always use an array of pages to store the data instead of a mix of
     buffers and pages.
  2/ It is now possible to have 'holes' in a page actor, i.e. NULL
     pages in the array.
     When reading a block (default 128K), squashfs tries to grab all
     the pages covering this block. If a single page is up-to-date or
     locked, it falls back to using an intermediate buffer to do the
     read and then copy the pages into the actor. Allowing holes in the
     page actor removes the need for this intermediate buffer.
  3/ Refactor the wrappers to share code that deals with page actors.

Signed-off-by: Adrien Schildknecht <adriens@google.com>
Change-Id: I98128bed5d518cf31b67e788a85b275e9a323bec

7 years ago  Squashfs: remove the FILE_CACHE option
Adrien Schildknecht [Wed, 28 Sep 2016 19:14:39 +0000 (12:14 -0700)]
Squashfs: remove the FILE_CACHE option

FILE_DIRECT is working fine and offers faster results and a lower memory
footprint.

Removing FILE_CACHE makes our life easier because we don't have to
maintain 2 different functions that do the same thing.

Signed-off-by: Adrien Schildknecht <adriens@google.com>
Change-Id: I6689ba74d0042c222a806f9edc539995e8e04c6b

7 years ago  ANDROID: android-recommended.cfg: CONFIG_CPU_SW_DOMAIN_PAN=y
Sami Tolvanen [Mon, 6 Feb 2017 22:27:24 +0000 (14:27 -0800)]
ANDROID: android-recommended.cfg: CONFIG_CPU_SW_DOMAIN_PAN=y

Bug: 31374660
Change-Id: Id2710a5fa2694da66d3f34cbcc0c2a58a006cec5
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
7 years ago  FROMLIST: 9p: fix a potential acl leak
Cong Wang [Tue, 13 Dec 2016 18:33:34 +0000 (10:33 -0800)]
FROMLIST: 9p: fix a potential acl leak

(https://lkml.org/lkml/2016/12/13/579)

posix_acl_update_mode() could possibly clear 'acl'; if so,
we leak the memory pointed to by 'acl'. Save this pointer
before calling posix_acl_update_mode() and release the memory
if 'acl' really gets cleared.
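
The shape of the fix, roughly (a sketch, not the exact 9p diff; the
surrounding inode, mode and retval variables are assumed context):

	struct posix_acl *old_acl = acl;

	retval = posix_acl_update_mode(inode, &mode, &acl);
	if (retval)
		return retval;
	if (!acl) {
		/* the ACL is now represented by the mode bits alone,
		 * so drop the reference still held on the old one */
		posix_acl_release(old_acl);
	}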

Reported-by: Mark Salyzyn <salyzyn@android.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Greg Kurz <groug@kaod.org>
Cc: Eric Van Hensbergen <ericvh@gmail.com>
Cc: Ron Minnich <rminnich@sandia.gov>
Cc: Latchesar Ionkov <lucho@ionkov.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Bug: 32458736
Change-Id: Ia78da401e6fd1bfd569653bd2cd0ebd3f9c737a0

7 years ago  UPSTREAM: arm64: Allow hw watchpoint of length 3,5,6 and 7
Pratyush Anand [Mon, 14 Nov 2016 14:02:45 +0000 (19:32 +0530)]
UPSTREAM: arm64: Allow hw watchpoint of length 3,5,6 and 7

(cherry picked from commit 0ddb8e0b784ba034f3096d5a54684d0d73155e2a)

Since arm64 can support any offset within a double-word limit,
now support other lengths within that range as well.

Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Pavel Labath <labath@google.com>
Change-Id: Ibcb263a3903572336ccbf96e0180d3990326545a
Bug: 30919905

7 years ago  BACKPORT: arm64: hw_breakpoint: Handle inexact watchpoint addresses
Pavel Labath [Mon, 14 Nov 2016 14:02:44 +0000 (19:32 +0530)]
BACKPORT: arm64: hw_breakpoint: Handle inexact watchpoint addresses

(cherry picked from commit fdfeff0f9e3d9be2b68fa02566017ffc581ae17b)

Arm64 hardware does not always report a watchpoint hit address that
matches one of the watchpoints set. It can also report an address
"near" the watchpoint if a single instruction access both watched and
unwatched addresses. There is no straight-forward way, short of
disassembling the offending instruction, to map that address back to
the watchpoint.

Previously, when the hardware reported a watchpoint hit on an address
that did not match our watchpoint (this happens in case of instructions
which access large chunks of memory such as "stp") the process would
enter a loop where we would be continually resuming it (because we did
not recognise that watchpoint hit) and it would keep hitting the
watchpoint again and again. The tracing process would never get
notified of the watchpoint hit.

This commit fixes the problem by looking at the watchpoints near the
address reported by the hardware. If the address does not exactly match
one of the watchpoints we have set, it attributes the hit to the
nearest watchpoint we have.  This heuristic is a bit dodgy, but I don't
think we can do much more, given the hardware limitations.

Signed-off-by: Pavel Labath <labath@google.com>
[panand: reworked to rebase on his patches]
Signed-off-by: Pratyush Anand <panand@redhat.com>
[will: use __ffs instead of ffs - 1]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Pavel Labath <labath@google.com>
[pavel: trivial fixup in hw_breakpoint.c:watchpoint_handler]
Change-Id: I714dfaa3947d89d89a9e9a1ea84914d44ba0faa3
Bug: 30919905

7 years ago  UPSTREAM: arm64: Allow hw watchpoint at varied offset from base address
Pratyush Anand [Mon, 14 Nov 2016 14:02:43 +0000 (19:32 +0530)]
UPSTREAM: arm64: Allow hw watchpoint at varied offset from base address

ARM64 hardware supports watchpoints at any double-word-aligned address.
However, it can select any consecutive bytes from offset 0 to 7 from that
base address. For example, if the base address is programmed as 0x420030 and
the byte select is 0x1C, then accesses of 0x420032, 0x420033 and 0x420034 will
generate a watchpoint exception.
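
Decoding that example (a sketch; variable names are illustrative, the real
driver works on the DBGWCR byte-address-select field):

	unsigned long base = 0x420030;
	u8 bas = 0x1C;				/* 0b00011100               */
	unsigned int first = __ffs(bas);	/* lowest selected byte: 2  */
	unsigned int len = hweight8(bas);	/* selected byte count:  3  */
	/* watched bytes: base + first .. base + first + len - 1,
	 * i.e. 0x420032 .. 0x420034 */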

Currently, we do not have such modularity. We can only program byte,
halfword, word and double word access exceptions from any base address.

This patch adds support to overcome the above limitations.

Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Pavel Labath <labath@google.com>
Change-Id: I28b1ca63f63182c10c3d6b6b3bacf6c56887ddbe
Bug: 30919905

7 years ago  BACKPORT: hw_breakpoint: Allow watchpoint of length 3,5,6 and 7
Pratyush Anand [Mon, 14 Nov 2016 14:02:42 +0000 (19:32 +0530)]
BACKPORT: hw_breakpoint: Allow watchpoint of length 3,5,6 and 7

(cherry picked from commit 651be3cb085341a21847e47c694c249c3e1e4e5b)

We only support breakpoints/watchpoints of length 1, 2, 4 and 8. If we can
support other lengths as well, then the user may watch more data with fewer
watchpoints (provided the hardware supports it). For example: if we
have to watch only the 4th, 5th and 6th bytes from a 64-bit-aligned address, we
currently have to use two slots to implement it. One slot will watch a
half word at offset 4 and the other a byte at offset 6. If we can have a
watchpoint of length 3 then we can watch it with a single slot as well.

ARM64 hardware does support such functionality, therefore these new
definitions are added in the generic layer.

Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Pavel Labath <labath@google.com>
[pavel: tools/include/uapi/linux/hw_breakpoint.h is not present in this branch]
Change-Id: Ie17ed89ca526e4fddf591bb4e556fdfb55fc2eac
Bug: 30919905

7 years ago  Merge branch 'linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4-android
Alex Shi [Sun, 9 Apr 2017 04:01:26 +0000 (12:01 +0800)]
Merge branch 'linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4-android

7 years ago Merge tag 'v4.4.60' into linux-linaro-lsk-v4.4
Alex Shi [Sun, 9 Apr 2017 04:01:24 +0000 (12:01 +0800)]
 Merge tag 'v4.4.60' into linux-linaro-lsk-v4.4

 This is the 4.4.60 stable release

7 years ago  Linux 4.4.60
Greg Kroah-Hartman [Sat, 8 Apr 2017 07:53:53 +0000 (09:53 +0200)]
Linux 4.4.60

7 years ago  padata: avoid race in reordering
Jason A. Donenfeld [Thu, 23 Mar 2017 11:24:43 +0000 (12:24 +0100)]
padata: avoid race in reordering

commit de5540d088fe97ad583cc7d396586437b32149a5 upstream.

Under extremely heavy uses of padata, crashes occur, and with list
debugging turned on, this happens instead:

[87487.298728] WARNING: CPU: 1 PID: 882 at lib/list_debug.c:33
__list_add+0xae/0x130
[87487.301868] list_add corruption. prev->next should be next
(ffffb17abfc043d0), but was ffff8dba70872c80. (prev=ffff8dba70872b00).
[87487.339011]  [<ffffffff9a53d075>] dump_stack+0x68/0xa3
[87487.342198]  [<ffffffff99e119a1>] ? console_unlock+0x281/0x6d0
[87487.345364]  [<ffffffff99d6b91f>] __warn+0xff/0x140
[87487.348513]  [<ffffffff99d6b9aa>] warn_slowpath_fmt+0x4a/0x50
[87487.351659]  [<ffffffff9a58b5de>] __list_add+0xae/0x130
[87487.354772]  [<ffffffff9add5094>] ? _raw_spin_lock+0x64/0x70
[87487.357915]  [<ffffffff99eefd66>] padata_reorder+0x1e6/0x420
[87487.361084]  [<ffffffff99ef0055>] padata_do_serial+0xa5/0x120

padata_reorder calls list_add_tail with the list to which it's adding
locked, which seems correct:

spin_lock(&squeue->serial.lock);
list_add_tail(&padata->list, &squeue->serial.list);
spin_unlock(&squeue->serial.lock);

This therefore leaves only one place where such an inconsistency could occur:
if padata->list is added at the same time on two different threads.
This padata pointer comes from the function call to
padata_get_next(pd), which has in it the following block:

next_queue = per_cpu_ptr(pd->pqueue, cpu);
padata = NULL;
reorder = &next_queue->reorder;
if (!list_empty(&reorder->list)) {
       padata = list_entry(reorder->list.next,
                           struct padata_priv, list);
       spin_lock(&reorder->lock);
       list_del_init(&padata->list);
       atomic_dec(&pd->reorder_objects);
       spin_unlock(&reorder->lock);

       pd->processed++;

       goto out;
}
out:
return padata;

I strongly suspect that the problem here is that two threads can race
on the reorder list. Even though the deletion is locked, the call to
list_entry is not locked, which means it's feasible that two threads
pick up the same padata object and subsequently call list_add_tail on
it at the same time. The fix is thus to hoist that lock outside of
that block.
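
With the lock hoisted, that block essentially becomes (sketch of the fix):

reorder = &next_queue->reorder;

spin_lock(&reorder->lock);
if (!list_empty(&reorder->list)) {
       padata = list_entry(reorder->list.next,
                           struct padata_priv, list);

       list_del_init(&padata->list);
       atomic_dec(&pd->reorder_objects);

       pd->processed++;

       spin_unlock(&reorder->lock);
       goto out;
}
spin_unlock(&reorder->lock);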

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years ago  blk: Ensure users for current->bio_list can see the full list.
NeilBrown [Fri, 10 Mar 2017 06:00:47 +0000 (17:00 +1100)]
blk: Ensure users for current->bio_list can see the full list.

commit f5fe1b51905df7cfe4fdfd85c5fb7bc5b71a094f upstream.

Commit 79bd99596b73 ("blk: improve order of bio handling in generic_make_request()")
changed current->bio_list so that it did not contain *all* of the
queued bios, but only those submitted by the currently running
make_request_fn.

There are two places which walk the list and requeue selected bios,
and others that check if the list is empty.  These are no longer
correct.

So redefine current->bio_list to point to an array of two lists, which
contain all queued bios, and adjust various code to test or walk both
lists.

Signed-off-by: NeilBrown <neilb@suse.com>
Fixes: 79bd99596b73 ("blk: improve order of bio handling in generic_make_request()")
Signed-off-by: Jens Axboe <axboe@fb.com>
[jwang: backport to 4.4]
Signed-off-by: Jack Wang <jinpu.wang@profitbricks.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[bwh: Restore changes in device-mapper from upstream version]
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
7 years ago  blk: improve order of bio handling in generic_make_request()
NeilBrown [Tue, 7 Mar 2017 20:38:05 +0000 (07:38 +1100)]
blk: improve order of bio handling in generic_make_request()

commit 79bd99596b7305ab08109a8bf44a6a4511dbf1cd upstream.

To avoid recursion on the kernel stack when stacked block devices
are in use, generic_make_request() will, when called recursively,
queue new requests for later handling.  They will be handled when the
make_request_fn for the current bio completes.

If any bios are submitted by a make_request_fn, these will ultimately
be handled sequentially.  If the handling of one of those generates
further requests, they will be added to the end of the queue.

This strict first-in-first-out behaviour can lead to deadlocks in
various ways, normally because a request might need to wait for a
previous request to the same device to complete.  This can happen when
they share a mempool, and can happen due to interdependencies
particular to the device.  Both md and dm have examples where this happens.

These deadlocks can be eradicated by more selective ordering of bios.
Specifically by handling them in depth-first order.  That is: when the
handling of one bio generates one or more further bios, they are
handled immediately after the parent, before any siblings of the
parent.  That way, when generic_make_request() calls make_request_fn
for some particular device, we can be certain that all previously
submitted requests for that device have been completely handled and are
not waiting for anything in the queue of requests maintained in
generic_make_request().

An easy way to achieve this would be to use a last-in-first-out stack
instead of a queue.  However this will change the order of consecutive
bios submitted by a make_request_fn, which could have unexpected consequences.
Instead we take a slightly more complex approach.
A fresh queue is created for each call to a make_request_fn.  After it completes,
any bios for a different device are placed on the front of the main queue, followed
by any bios for the same device, followed by all bios that were already on
the queue before the make_request_fn was called.
This provides the depth-first approach without reordering bios on the same level.

This, by itself, is not enough to remove all deadlocks.  It just makes
it possible for drivers to take the extra step required themselves.

To avoid deadlocks, drivers must never risk waiting for a request
after submitting one to generic_make_request.  This includes never
allocating from a mempool twice in the one call to a make_request_fn.

A common pattern in drivers is to call bio_split() in a loop, handling
the first part and then looping around to possibly split the next part.
Instead, a driver that finds it needs to split a bio should queue
(with generic_make_request) the second part, handle the first part,
and then return.  The new code in generic_make_request will ensure the
requests to underlying bios are processed first, then the second bio
that was split off.  If it splits again, the same process happens.  In
each case one bio will be completely handled before the next one is attempted.
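
A minimal sketch of that split-and-requeue pattern (MY_MAX_SECTORS and
my_bio_set are placeholders, not real symbols):

static blk_qc_t my_make_request_sketch(struct request_queue *q, struct bio *bio)
{
	if (bio_sectors(bio) > MY_MAX_SECTORS) {
		struct bio *split = bio_split(bio, MY_MAX_SECTORS, GFP_NOIO,
					      my_bio_set);

		bio_chain(split, bio);
		generic_make_request(bio);	/* queue the remainder for later */
		bio = split;			/* handle only the first piece   */
	}
	/* ... process 'bio' here ... */
	return BLK_QC_T_NONE;
}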

With this in place, it should be possible to disable the
punt_bios_to_recover() recovery thread for many block devices, and
eventually it may be possible to remove it completely.

Ref: http://www.spinics.net/lists/raid/msg54680.html
Tested-by: Jinpu Wang <jinpu.wang@profitbricks.com>
Inspired-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
[jwang: backport to 4.4]
Signed-off-by: Jack Wang <jinpu.wang@profitbricks.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years ago  power: reset: at91-poweroff: timely shutdown LPDDR memories
Alexandre Belloni [Tue, 25 Oct 2016 09:37:59 +0000 (11:37 +0200)]
power: reset: at91-poweroff: timely shutdown LPDDR memories

commit 0b0408745e7ff24757cbfd571d69026c0ddb803c upstream.

LPDDR memories can only handle up to 400 uncontrolled power-offs. Ensure the
proper power-off sequence is used before shutting down the platform.

Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Sebastian Reichel <sre@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years ago  KVM: kvm_io_bus_unregister_dev() should never fail
David Hildenbrand [Thu, 23 Mar 2017 17:24:19 +0000 (18:24 +0100)]
KVM: kvm_io_bus_unregister_dev() should never fail

commit 90db10434b163e46da413d34db8d0e77404cc645 upstream.

No caller currently checks the return value of
kvm_io_bus_unregister_dev(). This is evil, as all callers silently go on
freeing their device. A stale reference will remain in the io_bus,
getting used at least once more when the iobus gets torn down on
kvm_destroy_vm() - leading to use-after-free errors.

There is nothing the callers could do, except retrying over and over
again.

So let's simply remove the bus altogether, print an error and make
sure no one can access this broken bus again (returning -ENOMEM on any
attempt to access it).

Fixes: e93f8a0f821e ("KVM: convert io_bus to SRCU")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years ago  rtc: s35390a: improve irq handling
Uwe Kleine-König [Sat, 2 Jul 2016 15:28:10 +0000 (17:28 +0200)]
rtc: s35390a: improve irq handling

commit 3bd32722c827d00eafe8e6d5b83e9f3148ea7c7e upstream.

On some QNAP NAS devices the rtc can wake the machine. Several people
noticed that once the machine was woken this way it fails to shut down.
That's because the driver fails to acknowledge the interrupt and so it
stays active and restarts the machine immediately after shutdown. See
https://bugs.debian.org/794266 for a bug report.

Doing this correctly requires interpreting the INT2 flag of the first read
of the STATUS1 register because this bit is cleared by read.

Note this is not maximally robust though because a pending irq isn't
detected when the STATUS1 register was already read (and so INT2 is not
set) but the irq was not disabled. But that is a hardware imposed problem
that cannot easily be fixed by software.

Signed-off-by: Uwe Kleine-König <uwe@kleine-koenig.org>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years ago  rtc: s35390a: implement reset routine as suggested by the reference
Uwe Kleine-König [Sat, 2 Jul 2016 15:28:09 +0000 (17:28 +0200)]
rtc: s35390a: implement reset routine as suggested by the reference

commit 8e6583f1b5d1f5f129b873f1428b7e414263d847 upstream.

There were two deviations from the reference manual: you have to wait
half a second when POC is active and you might have to repeat
initialization when POC or BLD are still set after the sequence.

Note however that as POC and BLD are cleared by read the driver might
not be able to detect that a reset is necessary. I don't have a good
idea how to fix this.

Additionally report the value read from STATUS1 to the caller. This
prepares the next patch.

Signed-off-by: Uwe Kleine-König <uwe@kleine-koenig.org>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years ago  rtc: s35390a: make sure all members in the output are set
Uwe Kleine-König [Mon, 3 Apr 2017 21:32:38 +0000 (23:32 +0200)]
rtc: s35390a: make sure all members in the output are set

The rtc core calls .read_alarm with all fields initialized to 0. As
the s35390a driver doesn't touch some fields the returned date is
interpreted as a date in January 1900. So make sure all fields are set
to -1; some of them are then overwritten with the right data depending
on the hardware state.

In mainline this is done by commit d68778b80dd7 ("rtc: initialize output
parameter for read alarm to "uninitialized"") in the core. This is
considered to dangerous for stable as it might have side effects for
other rtc drivers that might for example rely on alarm->time.tm_sec
being initialized to 0.

Signed-off-by: Uwe Kleine-König <uwe@kleine-koenig.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years ago  rtc: s35390a: fix reading out alarm
Uwe Kleine-König [Sat, 2 Jul 2016 15:28:08 +0000 (17:28 +0200)]
rtc: s35390a: fix reading out alarm

commit f87e904ddd8f0ef120e46045b0addeb1cc88354e upstream.

There are several issues fixed in this patch:

 - When alarm isn't enabled, set .enabled to zero instead of returning
   -EINVAL.
 - Ignore how IRQ1 is configured when determining if IRQ2 is on.
 - The three alarm registers have an enable flag which must be
   evaluated.
 - The chip always triggers when the seconds register gets 0.

Note that the rtc framework however doesn't handle the result correctly
because it doesn't check wday being initialized and so interprets an
alarm being set for 10:00 AM in three days as 10:00 AM tomorrow (or
today if that's not over yet).

Signed-off-by: Uwe Kleine-König <uwe@kleine-koenig.org>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years ago  MIPS: Lantiq: Fix cascaded IRQ setup
Felix Fietkau [Thu, 19 Jan 2017 11:28:22 +0000 (12:28 +0100)]
MIPS: Lantiq: Fix cascaded IRQ setup

commit 6c356eda225e3ee134ed4176b9ae3a76f793f4dd upstream.

With the IRQ stack changes integrated, the XRX200 devices started
emitting a constant stream of kernel messages like this:

[  565.415310] Spurious IRQ: CAUSE=0x1100c300

This is caused by IP0 getting handled by plat_irq_dispatch() rather than
its vectored interrupt handler, which is fixed by commit de856416e714
("MIPS: IRQ Stack: Fix erroneous jal to plat_irq_dispatch").

Fix plat_irq_dispatch() to handle non-vectored IPI interrupts correctly
by setting up IP2-6 as proper chained IRQ handlers and calling do_IRQ
for all MIPS CPU interrupts.

Signed-off-by: Felix Fietkau <nbd@nbd.name>
Acked-by: John Crispin <john@phrozen.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15077/
[james.hogan@imgtec.com: tweaked commit message]
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years ago  mm, hugetlb: use pte_present() instead of pmd_present() in follow_huge_pmd()
Naoya Horiguchi [Fri, 31 Mar 2017 22:11:55 +0000 (15:11 -0700)]
mm, hugetlb: use pte_present() instead of pmd_present() in follow_huge_pmd()

commit c9d398fa237882ea07167e23bcfc5e6847066518 upstream.

I found the race condition which triggers the following bug when
move_pages() and soft offline are called on a single hugetlb page
concurrently.

    Soft offlining page 0x119400 at 0x700000000000
    BUG: unable to handle kernel paging request at ffffea0011943820
    IP: follow_huge_pmd+0x143/0x190
    PGD 7ffd2067
    PUD 7ffd1067
    PMD 0
        [61163.582052] Oops: 0000 [#1] SMP
    Modules linked in: binfmt_misc ppdev virtio_balloon parport_pc pcspkr i2c_piix4 parport i2c_core acpi_cpufreq ip_tables xfs libcrc32c ata_generic pata_acpi virtio_blk 8139too crc32c_intel ata_piix serio_raw libata virtio_pci 8139cp virtio_ring virtio mii floppy dm_mirror dm_region_hash dm_log dm_mod [last unloaded: cap_check]
    CPU: 0 PID: 22573 Comm: iterate_numa_mo Tainted: P           OE   4.11.0-rc2-mm1+ #2
    Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
    RIP: 0010:follow_huge_pmd+0x143/0x190
    RSP: 0018:ffffc90004bdbcd0 EFLAGS: 00010202
    RAX: 0000000465003e80 RBX: ffffea0004e34d30 RCX: 00003ffffffff000
    RDX: 0000000011943800 RSI: 0000000000080001 RDI: 0000000465003e80
    RBP: ffffc90004bdbd18 R08: 0000000000000000 R09: ffff880138d34000
    R10: ffffea0004650000 R11: 0000000000c363b0 R12: ffffea0011943800
    R13: ffff8801b8d34000 R14: ffffea0000000000 R15: 000077ff80000000
    FS:  00007fc977710740(0000) GS:ffff88007dc00000(0000) knlGS:0000000000000000
    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: ffffea0011943820 CR3: 000000007a746000 CR4: 00000000001406f0
    Call Trace:
     follow_page_mask+0x270/0x550
     SYSC_move_pages+0x4ea/0x8f0
     SyS_move_pages+0xe/0x10
     do_syscall_64+0x67/0x180
     entry_SYSCALL64_slow_path+0x25/0x25
    RIP: 0033:0x7fc976e03949
    RSP: 002b:00007ffe72221d88 EFLAGS: 00000246 ORIG_RAX: 0000000000000117
    RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fc976e03949
    RDX: 0000000000c22390 RSI: 0000000000001400 RDI: 0000000000005827
    RBP: 00007ffe72221e00 R08: 0000000000c2c3a0 R09: 0000000000000004
    R10: 0000000000c363b0 R11: 0000000000000246 R12: 0000000000400650
    R13: 00007ffe72221ee0 R14: 0000000000000000 R15: 0000000000000000
    Code: 81 e4 ff ff 1f 00 48 21 c2 49 c1 ec 0c 48 c1 ea 0c 4c 01 e2 49 bc 00 00 00 00 00 ea ff ff 48 c1 e2 06 49 01 d4 f6 45 bc 04 74 90 <49> 8b 7c 24 20 40 f6 c7 01 75 2b 4c 89 e7 8b 47 1c 85 c0 7e 2a
    RIP: follow_huge_pmd+0x143/0x190 RSP: ffffc90004bdbcd0
    CR2: ffffea0011943820
    ---[ end trace e4f81353a2d23232 ]---
    Kernel panic - not syncing: Fatal exception
    Kernel Offset: disabled

This bug is triggered when pmd_present() returns true for non-present
hugetlb, so fixing the present check in follow_huge_pmd() prevents it.
Using pmd_present() to determine present/non-present for hugetlb is not
correct, because pmd_present() checks multiple bits (not only
_PAGE_PRESENT) for historical reason and it can misjudge hugetlb state.
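
Roughly, the shape of the fix in follow_huge_pmd() (a hedged sketch, not
the exact upstream diff):

    pte_t pte = huge_ptep_get((pte_t *)pmd);

    if (pte_present(pte)) {
            /* a real mapping: safe to hand the page back */
            page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
    } else {
            /* migration or hwpoison entry: do not dereference it as if it
             * were mapped; wait for migration to finish or fail the lookup */
    }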

Fixes: e66f17ff7177 ("mm/hugetlb: take page table lock in follow_huge_pmd()")
Link: http://lkml.kernel.org/r/1490149898-20231-1-git-send-email-n-horiguchi@ah.jp.nec.com
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agodrm/radeon: Override fpfn for all VRAM placements in radeon_evict_flags
Michel Dänzer [Fri, 24 Mar 2017 10:01:09 +0000 (19:01 +0900)]
drm/radeon: Override fpfn for all VRAM placements in radeon_evict_flags

commit ce4b4f228e51219b0b79588caf73225b08b5b779 upstream.

We were accidentally only overriding the first VRAM placement. For BOs
with the RADEON_GEM_NO_CPU_ACCESS flag set,
radeon_ttm_placement_from_domain creates a second VRAM placement with
fpfn == 0. If VRAM is almost full, the first VRAM placement with
fpfn > 0 may not work, but the second one with fpfn == 0 always will
(the BO's current location trivially satisfies it). Because "moving"
the BO to its current location puts it back on the LRU list, this
results in an infinite loop.
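
Schematically, the fix walks every VRAM placement instead of only the
first one (field names taken from the TTM placement structs of that era):

    for (i = 0; i < rbo->placement.num_placement; i++) {
            if (!(rbo->placements[i].flags & TTM_PL_FLAG_VRAM))
                    continue;
            /* apply the CPU-accessible window limit to all of them */
            if (rbo->placements[i].fpfn < fpfn)
                    rbo->placements[i].fpfn = fpfn;
    }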

Fixes: 2a85aedd117c ("drm/radeon: Try evicting from CPU accessible to inaccessible VRAM first")
Reported-by: Zachary Michaels <zmichaels@oblong.com>
Reported-and-Tested-by: Julien Isorce <jisorce@oblong.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agoKVM: x86: clear bus pointer when destroyed
Peter Xu [Wed, 15 Mar 2017 08:01:17 +0000 (16:01 +0800)]
KVM: x86: clear bus pointer when destroyed

commit df630b8c1e851b5e265dc2ca9c87222e342c093b upstream.

When releasing a bus, clear the corresponding bus pointer to mark it as
gone. If any further device unregister happens on this bus, we know we
are done because the bus has already been released.
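
A minimal sketch of the idea (assuming the usual kvm->buses[] layout):

    /* on VM destruction */
    for (i = 0; i < KVM_NR_BUSES; i++) {
            kvm_io_bus_destroy(kvm->buses[i]);
            kvm->buses[i] = NULL;
    }

    /* later, in the device-unregister path */
    bus = kvm->buses[bus_idx];
    if (!bus)
            return;         /* bus already torn down, nothing left to do */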

Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agoUSB: fix linked-list corruption in rh_call_control()
Alan Stern [Fri, 24 Mar 2017 17:38:28 +0000 (13:38 -0400)]
USB: fix linked-list corruption in rh_call_control()

commit 1633682053a7ee8058e10c76722b9b28e97fb73f upstream.

Using KASAN, Dmitry found a bug in the rh_call_control() routine: If
buffer allocation fails, the routine returns immediately without
unlinking its URB from the control endpoint, eventually leading to
linked-list corruption.

This patch fixes the problem by jumping to the end of the routine
(where the URB is unlinked) when an allocation failure occurs.
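
In outline (label name illustrative):

    tbuf = kzalloc(tbuf_size, GFP_KERNEL);
    if (!tbuf) {
            status = -ENOMEM;
            goto error;     /* take the common exit path instead of returning */
    }

    /* ... normal root-hub request handling ... */

 error:
    /* the common exit path unlinks the URB from the control endpoint,
     * so the endpoint list never keeps a stale entry */
    usb_hcd_unlink_urb_from_ep(hcd, urb);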

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-and-tested-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agotty/serial: atmel: fix TX path in atmel_console_write()
Nicolas Ferre [Mon, 20 Mar 2017 15:38:57 +0000 (16:38 +0100)]
tty/serial: atmel: fix TX path in atmel_console_write()

commit 497e1e16f45c70574dc9922c7f75c642c2162119 upstream.

A side effect of 89d8232411a8 ("tty/serial: atmel_serial: BUG: stop DMA
from transmitting in stop_tx") is that the console can be called with
the TX path disabled. The system would then hang trying to push
characters out in atmel_console_putchar().
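
The console path therefore has to force the transmitter on for the
duration of the write, along these lines (a sketch using the driver's
register helpers):

    /* make sure TX is enabled while the console prints, even if the
     * normal TX path was stopped by stop_tx() */
    atmel_uart_writel(port, ATMEL_US_CR, ATMEL_US_TXEN);
    uart_console_write(port, s, count, atmel_console_putchar);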

Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Fixes: 89d8232411a8 ("tty/serial: atmel_serial: BUG: stop DMA from transmitting in stop_tx")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agotty/serial: atmel: fix race condition (TX+DMA)
Richard Genoud [Mon, 20 Mar 2017 10:52:41 +0000 (11:52 +0100)]
tty/serial: atmel: fix race condition (TX+DMA)

commit 31ca2c63fdc0aee725cbd4f207c1256f5deaabde upstream.

If uart_flush_buffer() is called between atmel_tx_dma() and
atmel_complete_tx_dma(), the circular buffer has been cleared, but not
atmel_port->tx_len.
That leads to a circular buffer overflow (dumping (UART_XMIT_SIZE -
atmel_port->tx_len) bytes).
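
The fix keeps the two in sync by also resetting the DMA bookkeeping when
the buffer is flushed, roughly (helper name assumed):

    static void atmel_flush_buffer(struct uart_port *port)
    {
            struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);

            /* the circ buffer was just emptied, so there is nothing left
             * for the DMA completion handler to account for */
            atmel_port->tx_len = 0;
    }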

Tested-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agoACPI: Do not create a platform_device for IOAPIC/IOxAPIC
Joerg Roedel [Wed, 22 Mar 2017 17:33:25 +0000 (18:33 +0100)]
ACPI: Do not create a platform_device for IOAPIC/IOxAPIC

commit 08f63d97749185fab942a3a47ed80f5bd89b8b7d upstream.

No platform-device is required for IO(x)APICs, so don't even
create them.

[ rjw: This fixes a problem with leaking platform device objects
  after IOAPIC/IOxAPIC hot-removal events.]
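
The change amounts to adding the IO(x)APIC HIDs to the list of devices
that never get a platform device, along these lines (HID strings assumed):

    static const struct acpi_device_id forbidden_id_list[] = {
            {"PNP0000",  0},        /* PIC */
            {"PNP0100",  0},        /* Timer */
            {"PNP0200",  0},        /* AT DMA controller */
            {"ACPI0009", 0},        /* IOxAPIC */
            {"ACPI000A", 0},        /* IOAPIC */
            {"", 0},
    };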

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agoACPI: Fix incompatibility with mcount-based function graph tracing
Josh Poimboeuf [Thu, 16 Mar 2017 13:56:28 +0000 (08:56 -0500)]
ACPI: Fix incompatibility with mcount-based function graph tracing

commit 61b79e16c68d703dde58c25d3935d67210b7d71b upstream.

Paul Menzel reported a warning:

  WARNING: CPU: 0 PID: 774 at /build/linux-ROBWaj/linux-4.9.13/kernel/trace/trace_functions_graph.c:233 ftrace_return_to_handler+0x1aa/0x1e0
  Bad frame pointer: expected f6919d98, received f6919db0
    from func acpi_pm_device_sleep_wake return to c43b6f9d

The warning means that function graph tracing is broken for the
acpi_pm_device_sleep_wake() function.  That's because the ACPI Makefile
unconditionally sets the '-Os' gcc flag to optimize for size.  That's an
issue because mcount-based function graph tracing is incompatible with
'-Os' on x86, thanks to the following gcc bug:

  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=42109

I have another patch pending which will ensure that mcount-based
function graph tracing is never used with CONFIG_CC_OPTIMIZE_FOR_SIZE on
x86.

But this patch is needed in addition to that one because the ACPI
Makefile overrides that config option for no apparent reason.  It has
had this flag since the beginning of git history, and there's no related
comment, so I don't know why it's there.  As far as I can tell, there's
no reason for it to be there.  The appropriate behavior is for it to
honor CONFIG_CC_OPTIMIZE_FOR_{SIZE,PERFORMANCE} like the rest of the
kernel.

Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agoASoC: atmel-classd: fix audio clock rate
Songjun Wu [Fri, 24 Feb 2017 07:10:43 +0000 (15:10 +0800)]
ASoC: atmel-classd: fix audio clock rate

commit cd3ac9affc43b44f49d7af70d275f0bd426ba643 upstream.

Fix the audio clock rate according to the datasheet.

Reported-by: Dushara Jayasinghe <dushara@successful.com.au>
Signed-off-by: Songjun Wu <songjun.wu@microchip.com>
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agoALSA: hda - fix a problem for lineout on a Dell AIO machine
Hui Wang [Fri, 31 Mar 2017 02:31:40 +0000 (10:31 +0800)]
ALSA: hda - fix a problem for lineout on a Dell AIO machine

commit 2f726aec19a9d2c63bec9a8a53a3910ffdcd09f8 upstream.

On this Dell AIO machine, the lineout jack does not work.

We found that pin 0x1a is assigned to the lineout on this machine. In
the past we applied ALC298_FIXUP_DELL1_MIC_NO_PRESENCE to fix the
headset-mic problem for this machine; that fixup redefines pin 0x1a as
a headphone-mic, and as a result the lineout no longer works.

After consulting with Dell, we learned that this machine doesn't support
a microphone via the headset jack, so we add a new fixup which only
defines pin 0x18 as the headset-mic.

[rearranged the fixup insertion position by tiwai in order to make the
 merge with other branches easier -- tiwai]

Fixes: 59ec4b57bcae ("ALSA: hda - Fix headset mic detection problem for two dell machines")
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agoALSA: seq: Fix race during FIFO resize
Takashi Iwai [Fri, 24 Mar 2017 16:07:57 +0000 (17:07 +0100)]
ALSA: seq: Fix race during FIFO resize

commit 2d7d54002e396c180db0c800c1046f0a3c471597 upstream.

When a new event is queued while processing to resize the FIFO in
snd_seq_fifo_clear(), it may lead to a use-after-free, as the old pool
that is being queued gets removed.  For avoiding this race, we need to
close the pool to be deleted and sync its usage before actually
deleting it.

The issue was spotted by syzkaller.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agoscsi: libsas: fix ata xfer length
John Garry [Thu, 16 Mar 2017 15:07:28 +0000 (23:07 +0800)]
scsi: libsas: fix ata xfer length

commit 9702c67c6066f583b629cf037d2056245bb7a8e6 upstream.

The total ata xfer length may not be calculated properly, in that we do
not use the proper method to get an sg element's DMA length.

According to the code comment, sg_dma_len() should be used after
dma_map_sg() is called.
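
So the total has to be summed from the mapped lengths, e.g. (field names
from struct sas_task):

    unsigned int xfer = 0;
    struct scatterlist *sg;
    unsigned int si;

    for_each_sg(task->scatter, sg, task->num_scatter, si)
            xfer += sg_dma_len(sg);         /* length after dma_map_sg() */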

This issue was found by turning on the SMMUv3 in front of the hisi_sas
controller in hip07. Multiple sg elements were being combined into a
single element, but the original first element length was being used as
the total xfer length.

Fixes: ff2aeb1eb64c8a4770a6 ("libata: convert to chained sg")
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agoscsi: sg: check length passed to SG_NEXT_CMD_LEN
peter chang [Wed, 15 Feb 2017 22:11:54 +0000 (14:11 -0800)]
scsi: sg: check length passed to SG_NEXT_CMD_LEN

commit bf33f87dd04c371ea33feb821b60d63d754e3124 upstream.

The user can control the size of the next command passed along, but the
value passed to the ioctl isn't checked against the usable max command
size.
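
The fix is a bounds check in the ioctl handler, roughly (the name of the
limit constant is assumed here):

    case SG_NEXT_CMD_LEN:
            result = get_user(val, ip);
            if (result)
                    return result;
            if (val > SG_MAX_CDB_SIZE)
                    return -ENOMEM;         /* reject oversized command lengths */
            sfp->next_cmd_len = (val > 0) ? val : 0;
            return 0;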

Signed-off-by: Peter Chang <dpf@google.com>
Acked-by: Douglas Gilbert <dgilbert@interlog.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agoscsi: mpt3sas: fix hang on ata passthrough commands
James Bottomley [Sun, 1 Jan 2017 17:39:24 +0000 (09:39 -0800)]
scsi: mpt3sas: fix hang on ata passthrough commands

commit ffb58456589443ca572221fabbdef3db8483a779 upstream.

mpt3sas has a firmware failure where it can only handle one pass-through
ATA command at a time.  If another comes in, contrary to the SAT
standard, it will hang until the first one completes (causing long
commands like secure erase to timeout).  The original fix was to block
the device when an ATA command came in, but this caused a regression
with

commit 669f044170d8933c3d66d231b69ea97cb8447338
Author: Bart Van Assche <bart.vanassche@sandisk.com>
Date:   Tue Nov 22 16:17:13 2016 -0800

    scsi: srp_transport: Move queuecommand() wait code to SCSI core

So fix the original fix of the secure erase timeout by properly
returning SAM_STAT_BUSY as the SAT standard recommends.  The original patch
also had a concurrency problem since scsih_qcmd is lockless at that
point (this is fixed by using atomic bitops to set and test the flag).
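
Schematically, in the queuecommand path (the flag and helper names here
are illustrative, not the exact driver symbols):

    if (is_ata_passthrough(scmd) &&
        test_and_set_bit(ATA_CMD_PENDING_BIT, &device_flags)) {
            /* a pass-through command is already in flight: tell the
             * midlayer to retry later instead of queueing a second one */
            scmd->result = SAM_STAT_BUSY;
            scmd->scsi_done(scmd);
            return 0;
    }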

[mkp: addressed feedback wrt. test_bit and fixed whitespace]

Fixes: 18f6084a989ba1b ("mpt3sas: Fix secure erase premature termination")
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Acked-by: Sreekanth Reddy <Sreekanth.Reddy@broadcom.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reported-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Joe Korty <joe.korty@ccur.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agoxen/setup: Don't relocate p2m over existing one
Ross Lagerwall [Mon, 12 Dec 2016 14:35:13 +0000 (14:35 +0000)]
xen/setup: Don't relocate p2m over existing one

commit 7ecec8503af37de6be4f96b53828d640a968705f upstream.

When relocating the p2m, take special care not to relocate it so
that it overlaps with the current location of the p2m/initrd. This is
needed since the full extent of the current location is not marked as a
reserved region in the e820.
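
The check the relocation has to make is the usual interval-overlap test,
e.g.:

    /* true if [a, a + a_size) and [b, b + b_size) intersect */
    static bool ranges_overlap(unsigned long a, unsigned long a_size,
                               unsigned long b, unsigned long b_size)
    {
            return a < b + b_size && b < a + a_size;
    }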

This was seen to happen to a dom0 with a large initial p2m and a small
reserved region in the middle of the initial p2m.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agolibceph: force GFP_NOIO for socket allocations
Ilya Dryomov [Tue, 21 Mar 2017 12:44:28 +0000 (13:44 +0100)]
libceph: force GFP_NOIO for socket allocations

commit 633ee407b9d15a75ac9740ba9d3338815e1fcb95 upstream.

sock_alloc_inode() allocates socket+inode and socket_wq with
GFP_KERNEL, which is not allowed on the writeback path:

    Workqueue: ceph-msgr con_work [libceph]
    ffff8810871cb018 0000000000000046 0000000000000000 ffff881085d40000
    0000000000012b00 ffff881025cad428 ffff8810871cbfd8 0000000000012b00
    ffff880102fc1000 ffff881085d40000 ffff8810871cb038 ffff8810871cb148
    Call Trace:
    [<ffffffff816dd629>] schedule+0x29/0x70
    [<ffffffff816e066d>] schedule_timeout+0x1bd/0x200
    [<ffffffff81093ffc>] ? ttwu_do_wakeup+0x2c/0x120
    [<ffffffff81094266>] ? ttwu_do_activate.constprop.135+0x66/0x70
    [<ffffffff816deb5f>] wait_for_completion+0xbf/0x180
    [<ffffffff81097cd0>] ? try_to_wake_up+0x390/0x390
    [<ffffffff81086335>] flush_work+0x165/0x250
    [<ffffffff81082940>] ? worker_detach_from_pool+0xd0/0xd0
    [<ffffffffa03b65b1>] xlog_cil_force_lsn+0x81/0x200 [xfs]
    [<ffffffff816d6b42>] ? __slab_free+0xee/0x234
    [<ffffffffa03b4b1d>] _xfs_log_force_lsn+0x4d/0x2c0 [xfs]
    [<ffffffff811adc1e>] ? lookup_page_cgroup_used+0xe/0x30
    [<ffffffffa039a723>] ? xfs_reclaim_inode+0xa3/0x330 [xfs]
    [<ffffffffa03b4dcf>] xfs_log_force_lsn+0x3f/0xf0 [xfs]
    [<ffffffffa039a723>] ? xfs_reclaim_inode+0xa3/0x330 [xfs]
    [<ffffffffa03a62c6>] xfs_iunpin_wait+0xc6/0x1a0 [xfs]
    [<ffffffff810aa250>] ? wake_atomic_t_function+0x40/0x40
    [<ffffffffa039a723>] xfs_reclaim_inode+0xa3/0x330 [xfs]
    [<ffffffffa039ac07>] xfs_reclaim_inodes_ag+0x257/0x3d0 [xfs]
    [<ffffffffa039bb13>] xfs_reclaim_inodes_nr+0x33/0x40 [xfs]
    [<ffffffffa03ab745>] xfs_fs_free_cached_objects+0x15/0x20 [xfs]
    [<ffffffff811c0c18>] super_cache_scan+0x178/0x180
    [<ffffffff8115912e>] shrink_slab_node+0x14e/0x340
    [<ffffffff811afc3b>] ? mem_cgroup_iter+0x16b/0x450
    [<ffffffff8115af70>] shrink_slab+0x100/0x140
    [<ffffffff8115e425>] do_try_to_free_pages+0x335/0x490
    [<ffffffff8115e7f9>] try_to_free_pages+0xb9/0x1f0
    [<ffffffff816d56e4>] ? __alloc_pages_direct_compact+0x69/0x1be
    [<ffffffff81150cba>] __alloc_pages_nodemask+0x69a/0xb40
    [<ffffffff8119743e>] alloc_pages_current+0x9e/0x110
    [<ffffffff811a0ac5>] new_slab+0x2c5/0x390
    [<ffffffff816d71c4>] __slab_alloc+0x33b/0x459
    [<ffffffff815b906d>] ? sock_alloc_inode+0x2d/0xd0
    [<ffffffff8164bda1>] ? inet_sendmsg+0x71/0xc0
    [<ffffffff815b906d>] ? sock_alloc_inode+0x2d/0xd0
    [<ffffffff811a21f2>] kmem_cache_alloc+0x1a2/0x1b0
    [<ffffffff815b906d>] sock_alloc_inode+0x2d/0xd0
    [<ffffffff811d8566>] alloc_inode+0x26/0xa0
    [<ffffffff811da04a>] new_inode_pseudo+0x1a/0x70
    [<ffffffff815b933e>] sock_alloc+0x1e/0x80
    [<ffffffff815ba855>] __sock_create+0x95/0x220
    [<ffffffff815baa04>] sock_create_kern+0x24/0x30
    [<ffffffffa04794d9>] con_work+0xef9/0x2050 [libceph]
    [<ffffffffa04aa9ec>] ? rbd_img_request_submit+0x4c/0x60 [rbd]
    [<ffffffff81084c19>] process_one_work+0x159/0x4f0
    [<ffffffff8108561b>] worker_thread+0x11b/0x530
    [<ffffffff81085500>] ? create_worker+0x1d0/0x1d0
    [<ffffffff8108b6f9>] kthread+0xc9/0xe0
    [<ffffffff8108b630>] ? flush_kthread_worker+0x90/0x90
    [<ffffffff816e1b98>] ret_from_fork+0x58/0x90
    [<ffffffff8108b630>] ? flush_kthread_worker+0x90/0x90

Use memalloc_noio_{save,restore}() to temporarily force GFP_NOIO here.
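
In outline (address family and socket type shown generically):

    unsigned int noio_flag;
    int ret;

    /* every allocation between save and restore is implicitly GFP_NOIO,
     * including the inode/socket_wq allocations inside sock_create_kern() */
    noio_flag = memalloc_noio_save();
    ret = sock_create_kern(net, AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
    memalloc_noio_restore(noio_flag);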

Link: http://tracker.ceph.com/issues/19309
Reported-by: Sergey Jerusalimov <wintchester@gmail.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agoMerge branch 'linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4-android
Alex Shi [Sat, 1 Apr 2017 04:01:40 +0000 (12:01 +0800)]
Merge branch 'linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4-android

7 years ago Merge tag 'v4.4.59' into linux-linaro-lsk-v4.4
Alex Shi [Sat, 1 Apr 2017 04:01:36 +0000 (12:01 +0800)]
 Merge tag 'v4.4.59' into linux-linaro-lsk-v4.4

 This is the 4.4.59 stable release

7 years agoMerge branch 'linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4-android
Alex Shi [Sat, 1 Apr 2017 03:15:12 +0000 (11:15 +0800)]
Merge branch 'linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4-android

7 years agoMerge branch 'v4.4/topic/mm-kaslr-pax_usercopy' into linux-linaro-lsk-v4.4
Alex Shi [Sat, 1 Apr 2017 03:08:01 +0000 (11:08 +0800)]
Merge branch 'v4.4/topic/mm-kaslr-pax_usercopy' into linux-linaro-lsk-v4.4

To fix a build failure in fs/proc/kcore.c

7 years agofs/proc/kcore.c: Make bounce buffer global for read
Jiri Olsa [Thu, 8 Sep 2016 07:57:07 +0000 (09:57 +0200)]
fs/proc/kcore.c: Make bounce buffer global for read

The next patch adds a bounce buffer for the ktext area, so it's
convenient to have a single bounce buffer for both the vmalloc/module
and ktext cases.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit f5beeb1851ea6f8cfcf2657f26cb24c0582b4945)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
7 years agoLinux 4.4.59
Greg Kroah-Hartman [Fri, 31 Mar 2017 08:17:09 +0000 (10:17 +0200)]
Linux 4.4.59

7 years agosched/rt: Add a missing rescheduling point
Sebastian Andrzej Siewior [Tue, 24 Jan 2017 14:40:06 +0000 (15:40 +0100)]
sched/rt: Add a missing rescheduling point

commit 619bd4a71874a8fd78eb6ccf9f272c5e98bcc7b7 upstream.

Since the change in commit:

  fd7a4bed1835 ("sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks")

... we don't reschedule a task under certain circumstances:

Let's say task-A, SCHED_OTHER, is running on CPU0 (and it may run only on
CPU0) and holds a PI lock. This task is removed from the CPU because it
used up its time slice and another SCHED_OTHER task is running. Task-B on
CPU1 runs at RT priority and asks for the lock owned by task-A. This
results in a priority boost for task-A. Task-B goes to sleep until the
lock has been made available. Task-A is already runnable (but not active),
so it receives no wake up.

The reality now is that task-A gets on the CPU once the scheduler decides
to remove the current task despite the fact that a high priority task is
enqueued and waiting. This may take a long time.

The desired behaviour is that CPU0 immediately reschedules after the
priority boost which made task-A the highest-priority task.
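
The missing step is a reschedule on the boosted task's CPU, roughly:

    /* the boosted task now outranks whatever is running on its runqueue,
     * so force a reschedule instead of waiting for the next tick */
    if (p->prio < rq->curr->prio)
            resched_curr(rq);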

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: fd7a4bed1835 ("sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks")
Link: http://lkml.kernel.org/r/20170124144006.29821-1-bigeasy@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agofscrypt: remove broken support for detecting keyring key revocation
Eric Biggers [Tue, 21 Feb 2017 23:07:11 +0000 (15:07 -0800)]
fscrypt: remove broken support for detecting keyring key revocation

commit 1b53cf9815bb4744958d41f3795d5d5a1d365e2d upstream.

Filesystem encryption ostensibly supported revoking a keyring key that
had been used to "unlock" encrypted files, causing those files to become
"locked" again.  This was, however, buggy for several reasons, the most
severe of which was that when key revocation happened to be detected for
an inode, its fscrypt_info was immediately freed, even while other
threads could be using it for encryption or decryption concurrently.
This could be exploited to crash the kernel or worse.

This patch fixes the use-after-free by removing the code which detects
the keyring key having been revoked, invalidated, or expired.  Instead,
an encrypted inode that is "unlocked" now simply remains unlocked until
it is evicted from memory.  Note that this is no worse than the case for
block device-level encryption, e.g. dm-crypt, and it still remains
possible for a privileged user to evict unused pages, inodes, and
dentries by running 'sync; echo 3 > /proc/sys/vm/drop_caches', or by
simply unmounting the filesystem.  In fact, one of those actions was
already needed anyway for key revocation to work even somewhat sanely.
This change is not expected to break any applications.

In the future I'd like to implement a real API for fscrypt key
revocation that interacts sanely with ongoing filesystem operations ---
waiting for existing operations to complete and blocking new operations,
and invalidating and sanitizing key material and plaintext from the VFS
caches.  But this is a hard problem, and for now this bug must be fixed.

This bug affected almost all versions of ext4, f2fs, and ubifs
encryption, and it was potentially reachable in any kernel configured
with encryption support (CONFIG_EXT4_ENCRYPTION=y,
CONFIG_EXT4_FS_ENCRYPTION=y, CONFIG_F2FS_FS_ENCRYPTION=y, or
CONFIG_UBIFS_FS_ENCRYPTION=y).  Note that older kernels did not use the
shared fs/crypto/ code, but due to the potential security implications
of this bug, it may still be worthwhile to backport this fix to them.

Fixes: b7236e21d55f ("ext4 crypto: reorganize how we store keys in the inode")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Acked-by: Michael Halcrow <mhalcrow@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agometag/ptrace: Reject partial NT_METAG_RPIPE writes
Dave Martin [Mon, 27 Mar 2017 14:10:57 +0000 (15:10 +0100)]
metag/ptrace: Reject partial NT_METAG_RPIPE writes

commit 7195ee3120d878259e8d94a5d9f808116f34d5ea upstream.

It's not clear what behaviour is sensible when doing a partial write of
NT_METAG_RPIPE, so just don't bother.

This patch assumes that userspace will never rely on a partial SETREGSET
in this case, since it's not clear what should happen anyway.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agometag/ptrace: Provide default TXSTATUS for short NT_PRSTATUS
Dave Martin [Mon, 27 Mar 2017 14:10:56 +0000 (15:10 +0100)]
metag/ptrace: Provide default TXSTATUS for short NT_PRSTATUS

commit 5fe81fe98123ce41265c65e95d34418d30d005d1 upstream.

Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET
to fill TXSTATUS, a well-defined default value is used, based on the
task's current value.

Suggested-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agometag/ptrace: Preserve previous registers for short regset write
Dave Martin [Mon, 27 Mar 2017 14:10:55 +0000 (15:10 +0100)]
metag/ptrace: Preserve previous registers for short regset write

commit a78ce80d2c9178351b34d78fec805140c29c193e upstream.

Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET
to fill all the registers, the thread's old registers are preserved.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7 years agosparc/ptrace: Preserve previous registers for short regset write
Dave Martin [Mon, 27 Mar 2017 14:10:59 +0000 (15:10 +0100)]
sparc/ptrace: Preserve previous registers for short regset write

commit d3805c546b275c8cc7d40f759d029ae92c7175f2 upstream.

Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET
to fill all the registers, the thread's old registers are preserved.
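
The common pattern behind these regset fixes is to start from the
thread's current values and only overwrite what userspace supplied
(a sketch; the regs accessor is illustrative):

    struct user_regs_struct regs = *task_user_regs(target); /* old values */

    ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
                             &regs, 0, sizeof(regs));
    if (ret)
            return ret;

    /* commit: fields not covered by the write keep their previous contents */
    *task_user_regs(target) = regs;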

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>