Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:19 +0000 (15:38 -0700)]
zram: move comp allocation out of init_lock
While fixing the lockdep spew on ->init_lock reported by Sasha Levin [1],
Minchan Kim noted [2] that it is better to move the compression backend
allocation (which uses GFP_KERNEL) out of the ->init_lock lock, the same way
as is done for zram_meta_alloc(), in order to prevent the same lockdep spew.
[1] https://lkml.org/lkml/2014/2/27/337
[2] https://lkml.org/lkml/2014/3/3/32
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Reported-by: Minchan Kim <minchan@kernel.org>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:18 +0000 (15:38 -0700)]
zram: add lz4 algorithm backend
Introduce an LZ4 compression backend and make it available for selection.
LZ4 support is optional and requires the user to set the ZRAM_LZ4_COMPRESS
config option. The default compression backend is LZO.
TEST
(x86_64, core i5, 2 cores + 2 hyperthreading, zram disk size 1G,
ext4 file system, 3 compression streams)
iozone -t 3 -R -r 16K -s 60M -I +Z
Test LZO LZ4
----------------------------------------------
Initial write    1642744.62    1317005.09
Rewrite          2498980.88    1800645.16
Read             3957026.38    5877043.75
Re-read          3950997.38    5861847.00
Reverse Read     2937114.56    5047384.00
Stride read      2948163.19    4929587.38
Random read      3292692.69    4880793.62
Mixed workload   1545602.62    3502940.38
Random write     2448039.75    1758786.25
Pwrite           1670051.03    1338329.69
Pread            2530682.00    5097177.62
Fwrite           3232085.62    3275942.56
Fread            6306880.25    6645271.12
So on my system LZ4 is slower in write-only tests, while it performs
better in read-only and mixed (reads + writes) tests.
Official LZ4 benchmarks are available at http://code.google.com/p/lz4/
(the Linux kernel uses revision r90).
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:17 +0000 (15:38 -0700)]
zram: make compression algorithm selection possible
Add and document the `comp_algorithm' device attribute. This attribute allows
showing the supported compression algorithms and the currently selected
algorithm:
cat /sys/block/zram0/comp_algorithm
[lzo] lz4
and changing the selected compression algorithm:
echo lzo > /sys/block/zram0/comp_algorithm
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:15 +0000 (15:38 -0700)]
zram: add set_max_streams knob
This patch allows changing max_comp_streams on an initialised zcomp.
Introduce a zcomp set_max_streams() knob, with zcomp_strm_multi_set_max_streams()
and zcomp_strm_single_set_max_streams() callbacks to change the streams limit
for zcomp_strm_multi and zcomp_strm_single, respectively. set_max_streams
for a single stream zcomp does nothing.
If the user has lowered the limit, then zcomp_strm_multi_set_max_streams()
attempts to immediately free extra streams (as many as it can, depending
on idle stream availability).
Note, this patch does not allow changing the stream `policy' from single to
multi stream (or vice versa) on an already initialised compression backend.
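Below is a minimal sketch of how such a limit update could look. The field
names (idle_strm, strm_lock, avail_strm, max_strm) and the zcomp_strm_free()
helper are assumptions based on the description above, not verbatim driver code:

  static bool zcomp_strm_multi_set_max_streams(struct zcomp *comp, int num_strm)
  {
          struct zcomp_strm_multi *zs = comp->stream;
          struct zcomp_strm *zstrm;

          spin_lock(&zs->strm_lock);
          zs->max_strm = num_strm;
          /* if the limit was lowered, free as many idle streams as possible */
          while (zs->avail_strm > num_strm && !list_empty(&zs->idle_strm)) {
                  zstrm = list_entry(zs->idle_strm.next,
                                  struct zcomp_strm, list);
                  list_del(&zstrm->list);
                  zcomp_strm_free(comp, zstrm);
                  zs->avail_strm--;
          }
          spin_unlock(&zs->strm_lock);
          return true;
  }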
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:14 +0000 (15:38 -0700)]
zram: add multi stream functionality
The existing zram (zcomp) implementation has only one compression stream
(buffer and algorithm private part), so in order to prevent data
corruption only one write (compress operation) can use this compression
stream, forcing all concurrent write operations to wait for the stream lock
to be released. This patch changes zcomp to keep a list of compression
streams of user-defined size (via a sysfs device attribute). Each write
operation still exclusively holds a compression stream; the difference is
that we can have N write operations (depending on the size of the streams
list) executing in parallel. See the TEST section later in the commit
message for performance data.
Introduce struct zcomp_strm_multi and a set of functions to manage
zcomp_strm stream access. zcomp_strm_multi has a list of idle
zcomp_strm structs, a spinlock to protect the idle list, and a wait queue,
making it possible to perform parallel compressions.
The following functions are added:
- zcomp_strm_multi_find()/zcomp_strm_multi_release()
find and release a compression stream, implement required locking
- zcomp_strm_multi_create()/zcomp_strm_multi_destroy()
create and destroy zcomp_strm_multi
The zcomp ->strm_find() and ->strm_release() callbacks are set during
initialisation to zcomp_strm_multi_find()/zcomp_strm_multi_release(),
respectively.
Each time zcomp issues a zcomp_strm_multi_find() call, the following
operations are performed (a sketch of both helpers follows the lists below):
- spin lock strm_lock
- if the idle list is not empty, remove a zcomp_strm from the idle list,
spin unlock and return the zcomp stream pointer to the caller
- if the idle list is empty, the current task adds itself to the wait
queue; it will be woken up by a zcomp_strm_multi_release() caller.
zcomp_strm_multi_release():
- spin lock strm_lock
- add zcomp stream to idle list
- spin unlock, wake up sleeper
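A sketch of the find/release pair described above; struct and field names
here are assumptions for illustration only:

  static struct zcomp_strm *zcomp_strm_multi_find(struct zcomp *comp)
  {
          struct zcomp_strm_multi *zs = comp->stream;
          struct zcomp_strm *zstrm;

          while (1) {
                  spin_lock(&zs->strm_lock);
                  if (!list_empty(&zs->idle_strm)) {
                          zstrm = list_entry(zs->idle_strm.next,
                                          struct zcomp_strm, list);
                          list_del(&zstrm->list);
                          spin_unlock(&zs->strm_lock);
                          return zstrm;
                  }
                  spin_unlock(&zs->strm_lock);
                  /* no idle stream: sleep until a writer releases one */
                  wait_event(zs->strm_wait, !list_empty(&zs->idle_strm));
          }
  }

  static void zcomp_strm_multi_release(struct zcomp *comp,
                  struct zcomp_strm *zstrm)
  {
          struct zcomp_strm_multi *zs = comp->stream;

          spin_lock(&zs->strm_lock);
          list_add(&zstrm->list, &zs->idle_strm);
          spin_unlock(&zs->strm_lock);
          wake_up(&zs->strm_wait);
  }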
Minchan Kim reported that the spinlock-based locking scheme demonstrated
a severe performance regression for the single compression stream case,
compared to the mutex-based one (see https://lkml.org/lkml/2014/2/18/16):
base spinlock mutex
==Initial write ==Initial write ==Initial write
records: 5 records: 5 records: 5
avg:      1642424.35          avg:      699610.40           avg:      1655583.71
std:      39890.95(2.43%)     std:      232014.19(33.16%)   std:      52293.96
max:      1690170.94          max:      1163473.45          max:      1697164.75
min:      1568669.52          min:      573429.88           min:      1553410.23
==Rewrite                     ==Rewrite                     ==Rewrite
records:  5                   records:  5                   records:  5
avg:      1611775.39          avg:      501406.64           avg:      1684419.11
std:      17144.58(1.06%)     std:      15354.41(3.06%)     std:      18367.42
max:      1641800.95          max:      531356.78           max:      1706445.84
min:      1593515.27          min:      488817.78           min:      1655335.73
When only one compression stream is available, a mutex with spin-on-owner
tends to perform much better than frequent wait_event()/wake_up(). This
is why the single stream case is implemented as a special case with mutex
locking.
Introduce and document the zram device attribute max_comp_streams. This
attribute shows and stores the current zcomp's maximum number of zcomp
streams (max_strm). Extend zcomp's zcomp_create() with a `max_strm'
parameter. `max_strm' limits the number of zcomp_strm structs in the
compression backend's idle list (max_comp_streams).
max_comp_streams is used during initialisation as follows:
-- passing max_strm equal to 1 to zcomp_create() will initialise zcomp
using a single compression stream, zcomp_strm_single (mutex-based locking).
-- passing max_strm greater than 1 to zcomp_create() will initialise zcomp
using multiple compression streams, zcomp_strm_multi (spinlock-based locking).
The default max_comp_streams value is 1, meaning that zram will be
initialised with a single stream.
A later patch will introduce a configuration knob to change max_comp_streams
on an already initialised and used zcomp.
TEST
iozone -t 3 -R -r 16K -s 60M -I +Z
test base 1 strm (mutex) 3 strm (spinlock)
-----------------------------------------------------------------------
Initial write 589286.78 583518.39 718011.05
Rewrite         604837.97    596776.38   1515125.72
Random write    584120.11    595714.58   1388850.25
Pwrite          535731.17    541117.38    739295.27
Fwrite         1418083.88   1478612.72   1484927.06
Usage example:
set max_comp_streams to 4
echo 4 > /sys/block/zram0/max_comp_streams
show current max_comp_streams (default value is 1).
cat /sys/block/zram0/max_comp_streams
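A hedged sketch of what the sysfs store handler could look like at this
point in the series; dev_to_zram() and the init_lock rw_semaphore are assumed
to exist as in the rest of the driver, and the new limit is simply picked up
by the next disksize_store():

  static ssize_t max_comp_streams_store(struct device *dev,
                  struct device_attribute *attr, const char *buf, size_t len)
  {
          int num;
          struct zram *zram = dev_to_zram(dev);

          if (kstrtoint(buf, 0, &num) || num < 1)
                  return -EINVAL;

          down_write(&zram->init_lock);
          zram->max_comp_streams = num;   /* used on the next disksize_store() */
          up_write(&zram->init_lock);

          return len;
  }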
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:13 +0000 (15:38 -0700)]
zram: factor out single stream compression
This is a preparation patch to add multi stream support to zcomp.
Introduce struct zcomp_strm_single and a set of functions to manage
zcomp_strm stream access. zcomp_strm_single implements a single compression
stream, the same way as the current zcomp implementation. This moves zcomp_strm
stream control and locking out of zcomp, so the compressing backend is not
aware of the required locking.
Single and multi streams require different locking schemes. Minchan Kim
reported that the spinlock-based locking scheme (which is used in the multi
stream implementation) demonstrated a severe performance regression for the
single compression stream case, compared to the mutex-based one; see
https://lkml.org/lkml/2014/2/18/16
The following functions are added:
- zcomp_strm_single_find()/zcomp_strm_single_release()
find and release a compression stream, implement required locking
- zcomp_strm_single_create()/zcomp_strm_single_destroy()
create and destroy zcomp_strm_single
New ->strm_find() and ->strm_release() callbacks are added to zcomp, which are
set to zcomp_strm_single_find() and zcomp_strm_single_release() during
initialisation. Instead of direct locking and zcomp_strm access from
zcomp_strm_find() and zcomp_strm_release(), zcomp now calls ->strm_find()
and ->strm_release(), respectively.
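A sketch of the mutex-based pair described above (struct and field names are
illustrative assumptions):

  static struct zcomp_strm *zcomp_strm_single_find(struct zcomp *comp)
  {
          struct zcomp_strm_single *zs = comp->stream;

          mutex_lock(&zs->strm_lock);
          return zs->zstrm;
  }

  static void zcomp_strm_single_release(struct zcomp *comp,
                  struct zcomp_strm *zstrm)
  {
          struct zcomp_strm_single *zs = comp->stream;

          mutex_unlock(&zs->strm_lock);
  }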
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:12 +0000 (15:38 -0700)]
zram: use zcomp compressing backends
Do not perform direct LZO compress/decompress calls, initialise
and use zcomp LZO backend (single compression stream) instead.
[akpm@linux-foundation.org: resolve conflicts with zram-delete-zram_init_device-fix.patch]
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:11 +0000 (15:38 -0700)]
zram: introduce compressing backend abstraction
zram performs direct LZO compression algorithm calls, making LZO the one
and only option. While LZO generally performs well, the LZ4 algorithm
tends to have faster decompression (see http://code.google.com/p/lz4/
for the full report):
Name Ratio C.speed D.speed
MB/s MB/s
LZ4 (r101) 2.084 422 1820
LZO 2.06 2.106 414 600
Thus, users with mostly read (decompress) usage scenarios or a mixed
workload (writes with a relatively high number of read ops) will benefit
from using the LZ4 compression backend.
Introduce compressing backend abstraction zcomp in order to support
multiple compression algorithms with the following set of operations:
.create
.destroy
.compress
.decompress
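A sketch of what such an operations table could look like; the exact member
signatures shown here are assumptions for illustration:

  struct zcomp_backend {
          int (*compress)(const unsigned char *src, unsigned char *dst,
                          size_t *dst_len, void *private);
          int (*decompress)(const unsigned char *src, size_t src_len,
                          unsigned char *dst);
          void *(*create)(void);          /* allocate algorithm workmem */
          void (*destroy)(void *private); /* free algorithm workmem */
          const char *name;
  };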
Schematically zram write() usually contains the following steps:
0) preparation (decompression of partial IO, etc.)
1) lock buffer_lock mutex (protects meta compress buffers)
2) compress (using meta compress buffers)
3) alloc and map zs_pool object
4) copy compressed data (from meta compress buffers) to object allocated by 3)
5) free previous pool page, assign a new one
6) unlock buffer_lock mutex
As we can see, the compression buffers must remain untouched from 1) to 4),
because otherwise a concurrent write() could overwrite the data. At the same
time, zram_meta must be aware of a) the specific compression algorithm's
memory requirements and b) the locking necessary to protect the compression
buffers. To remove requirement a), a new struct zcomp_strm is introduced,
which contains a compress/decompress `buffer' and the compression algorithm's
`private' part, while struct zcomp implements zcomp_strm stream handling and
locking, removing requirement b) from zram meta. zcomp ->create() and
->destroy(), respectively, allocate and deallocate the algorithm-specific
zcomp_strm `private' part.
Every zcomp has a zcomp stream and a mutex to protect its compression stream.
Stream usage semantics remain the same -- only one write can hold the stream
lock and use its buffers. zcomp_strm_find() turns the caller into the
exclusive user of a stream (holding the stream mutex until zram releases the
stream), and zcomp_strm_release() makes the zcomp stream available again
(unlocks the stream mutex). Hence no concurrent write (compression)
operations are possible at the moment.
iozone -t 3 -R -r 16K -s 60M -I +Z
test base patched
--------------------------------------------------
Initial write 597992.91 591660.58
Rewrite 609674.34 616054.97
Read            2404771.75   2452909.12
Re-read         2459216.81   2470074.44
Reverse Read    1652769.66   1589128.66
Stride read     2202441.81   2202173.31
Random read     2236311.47   2276565.31
Mixed workload  1423760.41   1709760.06
Random write     579584.08    615933.86
Pwrite           597550.02    594933.70
Pread           1703672.53   1718126.72
Fwrite          1330497.06   1461054.00
Fread           3922851.00   3957242.62
Usage examples:
comp = zcomp_create(NAME) /* NAME e.g. "lzo" */
which initialises the compressing backend if the requested algorithm is supported.
Compress:
zstrm = zcomp_strm_find(comp)
zcomp_compress(comp, zstrm, src, &dst_len)
[..] /* copy compressed data */
zcomp_strm_release(comp, zstrm)
Decompress:
zcomp_decompress(comp, src, src_len, dst);
Free the compressing backend and its zcomp stream:
zcomp_destroy(comp)
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:09 +0000 (15:38 -0700)]
zram: delete zram_init_device()
Allocate a new `zram_meta' in disksize_store() only for an uninitialised zram
device, saving a number of allocations and deallocations in case
disksize_store() is called on a currently used device. At the same time,
the zram_meta stack variable is not necessary, because we can set ->meta
directly. There is also no need to set the QUEUE_FLAG_NONROT queue flag on
every disksize_store(); set it once during device creation.
[minchan@kernel.org: handle zram->meta alloc fail case]
[minchan@kernel.org: prevent lockdep spew of init_lock]
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:08 +0000 (15:38 -0700)]
zram: document failed_reads, failed_writes stats
Document `failed_reads' and `failed_writes' device attributes.
Remove the info about `discard'; there is no such zram attribute.
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:07 +0000 (15:38 -0700)]
zram: move zram size warning to documentation
Move zram warning about disksize and size of memory correlation to zram
documentation.
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:06 +0000 (15:38 -0700)]
zram: drop not used table `count' member
The struct table `count' member is not used.
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:05 +0000 (15:38 -0700)]
zram: report failed read and write stats
zram accounted, but did not report, the number of failed read and write
queries. Make these stats available as the failed_reads and failed_writes
attributes.
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:04 +0000 (15:38 -0700)]
zram: remove zram stats code duplication
Introduce a ZRAM_ATTR_RO macro that generates a device_attribute and a default
ATTR_show() function for the existing atomic64_t zram stats.
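A sketch of the macro's shape, assuming a dev_to_zram() helper and the
atomic64_t counters in struct zram_stats (illustrative, not necessarily the
exact upstream definition):

  #define ZRAM_ATTR_RO(name)                                           \
  static ssize_t name##_show(struct device *d,                         \
                  struct device_attribute *attr, char *b)              \
  {                                                                    \
          struct zram *zram = dev_to_zram(d);                          \
          return scnprintf(b, PAGE_SIZE, "%llu\n",                     \
                  (u64)atomic64_read(&zram->stats.name));              \
  }                                                                    \
  static struct device_attribute dev_attr_##name =                     \
          __ATTR(name, S_IRUGO, name##_show, NULL);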
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:03 +0000 (15:38 -0700)]
zram: use atomic64_t for all zram stats
This is a preparation patch for stats code duplication removal.
1) Use atomic64_t for the `pages_zero' and `pages_stored' zram stats.
2) The `compr_size' and `pages_zero' struct zram_stats members did not
follow the existing device attr naming scheme: zram_stats.ATTR has an
ATTR_show() function. Rename them:
-- compr_size -> compr_data_size
-- pages_zero -> zero_pages
Minchan Kim's note:
If we really have trouble with atomic stat operation, we could
change it with percpu_counter so that it could solve atomic overhead and
unnecessary memory space by introducing unsigned long instead of 64bit
atomic_t.
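A sketch of the renamed counters (other members elided; layout illustrative):

  struct zram_stats {
          atomic64_t compr_data_size;     /* was: compr_size */
          atomic64_t zero_pages;          /* was: pages_zero, now atomic64_t */
          atomic64_t pages_stored;        /* now atomic64_t */
          /* ... remaining counters unchanged ... */
  };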
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:02 +0000 (15:38 -0700)]
zram: remove good and bad compress stats
Remove the `good' and `bad' compressed sub-request stats. An RW request may
cause a number of RW sub-requests. zram used to account `good' compressed
sub-requests (with a compressed size less than 50% of the original size) and
`bad' compressed sub-requests (with a compressed size greater than 75% of the
original size), leaving sub-requests with a compressed size between 50% and
75% of the original size unaccounted and unreported. zram already accounts
each sub-request's compressed size, so we can calculate the real device
compression ratio.
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:01 +0000 (15:38 -0700)]
zram: do not pass rw argument to __zram_make_request()
Do not pass the rw argument down the __zram_make_request() -> zram_bvec_rw()
chain; decode it in zram_bvec_rw() instead. Besides, this is the place
where we distinguish the READ and WRITE bio data directions, so account zram
RW stats here instead of in __zram_make_request(). This also allows
accounting the real number of zram READ/WRITE operations, not just requests
(a single RW request may cause a number of zram RW ops with separate
locking, compression/decompression, etc.).
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Sergey Senozhatsky [Mon, 7 Apr 2014 22:38:00 +0000 (15:38 -0700)]
zram: drop `init_done' struct zram member
Introduce an init_done() helper function which allows us to drop the
`init_done' struct zram member. init_done() uses the fact that
->init_done == 1 is equivalent to ->meta != NULL.
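A sketch of the helper, derived directly from the statement above:

  static inline int init_done(struct zram *zram)
  {
          return zram->meta != NULL;
  }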
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
John Hubbard [Mon, 7 Apr 2014 22:37:59 +0000 (15:37 -0700)]
mm/page_alloc.c: change mm debug routines back to EXPORT_SYMBOL
A new dump_page() routine was recently added, and marked
EXPORT_SYMBOL_GPL. dump_page() was also added to the VM_BUG_ON_PAGE()
macro, and so the end result is that non-GPL code can no longer call
get_page() and a few other routines.
This only happens if the kernel was compiled with CONFIG_DEBUG_VM.
Change dump_page() to be EXPORT_SYMBOL.
Longer explanation:
Prior to commit 309381feaee5 ("mm: dump page when hitting a VM_BUG_ON
using VM_BUG_ON_PAGE"), it was possible to build MIT-licensed (non-GPL)
drivers on Fedora. Fedora is semi-unique in that it sets CONFIG_DEBUG_VM.
Because Fedora sets CONFIG_DEBUG_VM, they end up pulling in dump_page(),
via VM_BUG_ON_PAGE, via get_page(). As one of the authors of NVIDIA's
new, open source, "UVM-Lite" kernel module, I originally chose to use
the kernel's get_page() routine from within nvidia_uvm_page_cache.c,
because get_page() has always seemed to be very clearly intended for use
by non-GPL driver code.
So I'm hoping that making get_page() widely accessible again will not be
too controversial. We did check with Fedora first, and they responded
(https://bugzilla.redhat.com/show_bug.cgi?id=1074710#c3) that we should
try to get upstream changed, before asking Fedora to change. Their
reasoning seems beneficial to Linux: leaving CONFIG_DEBUG_VM set allows
Fedora to help catch mm bugs.
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Josh Boyer <jwboyer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Srikar Dronamraju [Mon, 7 Apr 2014 22:37:57 +0000 (15:37 -0700)]
numa: use LAST_CPUPID_SHIFT to calculate LAST_CPUPID_MASK
LAST_CPUPID_MASK is calculated using LAST_CPUPID_WIDTH. However,
LAST_CPUPID_WIDTH itself can be 0 (when LAST_CPUPID_NOT_IN_PAGE_FLAGS is
set). In such a case LAST_CPUPID_MASK turns out to be 0.
But with recent commit 1ae71d0319 ("mm: numa: bugfix for
LAST_CPUPID_NOT_IN_PAGE_FLAGS"), if LAST_CPUPID_MASK is 0,
page_cpupid_xchg_last() and page_cpupid_reset_last() cause
page->_last_cpupid to be set to 0.
This causes a performance regression. It's almost as if numa_balancing is
off.
Fix LAST_CPUPID_MASK by using LAST_CPUPID_SHIFT instead of
LAST_CPUPID_WIDTH.
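A before/after sketch of the one-line change described above (the `before'
definition is shown commented out for contrast):

  /* before: collapses to 0 when LAST_CPUPID_WIDTH == 0,
   * i.e. when LAST_CPUPID_NOT_IN_PAGE_FLAGS is set */
  /* #define LAST_CPUPID_MASK ((1UL << LAST_CPUPID_WIDTH) - 1) */

  /* after: LAST_CPUPID_SHIFT is always non-zero, so the mask stays valid */
  #define LAST_CPUPID_MASK ((1UL << LAST_CPUPID_SHIFT) - 1)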
Some performance numbers and perf stats with and without the fix.
(3.14-rc6)
----------
numa01
Performance counter stats for '/usr/bin/time -f %e %S %U %c %w -o start_bench.out -a ./numa01':
12,27,462 cs [100.00%]
2,41,957 migrations [100.00%]
1,68,01,713 faults [100.00%]
7,99,35,29,041 cache-misses
98,808 migrate:mm_migrate_pages [100.00%]
1407.690148814 seconds time elapsed
numa02
Performance counter stats for '/usr/bin/time -f %e %S %U %c %w -o start_bench.out -a ./numa02':
63,065 cs [100.00%]
14,364 migrations [100.00%]
2,08,118 faults [100.00%]
25,32,59,404 cache-misses
12 migrate:mm_migrate_pages [100.00%]
63.840827219 seconds time elapsed
(3.14-rc6 with fix)
-------------------
numa01
Performance counter stats for '/usr/bin/time -f %e %S %U %c %w -o start_bench.out -a ./numa01':
9,68,911 cs [100.00%]
1,01,414 migrations [100.00%]
88,38,697 faults [100.00%]
4,42,92,51,042 cache-misses
4,25,060 migrate:mm_migrate_pages [100.00%]
685.965331189 seconds time elapsed
numa02
Performance counter stats for '/usr/bin/time -f %e %S %U %c %w -o start_bench.out -a ./numa02':
17,543 cs [100.00%]
2,962 migrations [100.00%]
1,17,843 faults [100.00%]
11,80,61,644 cache-misses
12,358 migrate:mm_migrate_pages [100.00%]
20.380132343 seconds time elapsed
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Zhang Yanfei [Mon, 7 Apr 2014 22:37:56 +0000 (15:37 -0700)]
madvise: correct the comment of MADV_DODUMP flag
s/MADV_NODUMP/MADV_DONTDUMP/
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fabian Frederick [Mon, 7 Apr 2014 22:37:55 +0000 (15:37 -0700)]
mm/readahead.c: inline ra_submit
Commit f9acc8c7b35a ("readahead: sanify file_ra_state names") left
ra_submit with a single function call.
Move ra_submit to internal.h and inline it to save some stack. Thanks
to Andrew Morton for commenting on the different versions.
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mizuma, Masayoshi [Mon, 7 Apr 2014 22:37:54 +0000 (15:37 -0700)]
mm: hugetlb: fix softlockup when a large number of hugepages are freed.
When I decrease the value of nr_hugepages in procfs a lot, a softlockup
happens. This is because there is no chance of a context switch during this
process.
On the other hand, when I allocate a large number of hugepages, there is
some chance of a context switch, so a softlockup doesn't happen during
that process. It is therefore necessary to add a context switch to the
freeing process, just as in the allocating process, to avoid the softlockup.
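An abbreviated sketch of the idea in set_max_huge_pages(): give the scheduler
a chance while freeing a large number of pages, mirroring what the allocation
side already does. The surrounding code is elided and the loop-control
helpers shown here (persistent_huge_pages(), min_count) are assumptions about
that context:

  while (min_count < persistent_huge_pages(h)) {
          if (!free_pool_huge_page(h, nodes_allowed, 0))
                  break;
          /* drop hugetlb_lock and reschedule if needed, then retake it */
          cond_resched_lock(&hugetlb_lock);
  }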
When I freed 12 TB of hugepages with kernel-2.6.32-358.el6, the freeing
process occupied a CPU for over 150 seconds and the following softlockup
message appeared twice or more.
$ echo 6000000 > /proc/sys/vm/nr_hugepages
$ cat /proc/sys/vm/nr_hugepages
6000000
$ grep ^Huge /proc/meminfo
HugePages_Total:   6000000
HugePages_Free:    6000000
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
$ echo 0 > /proc/sys/vm/nr_hugepages
BUG: soft lockup - CPU#16 stuck for 67s! [sh:12883] ...
Pid: 12883, comm: sh Not tainted 2.6.32-358.el6.x86_64 #1
Call Trace:
free_pool_huge_page+0xb8/0xd0
set_max_huge_pages+0x128/0x190
hugetlb_sysctl_handler_common+0x113/0x140
hugetlb_sysctl_handler+0x1e/0x20
proc_sys_call_handler+0x97/0xd0
proc_sys_write+0x14/0x20
vfs_write+0xb8/0x1a0
sys_write+0x51/0x90
__audit_syscall_exit+0x265/0x290
system_call_fastpath+0x16/0x1b
I have not confirmed this problem with upstream kernels because I am not
able to prepare a machine equipped with 12TB of memory right now. However,
I confirmed that the required time was directly proportional to the number
of hugepages being decreased.
I measured the required times on a smaller machine; it showed 130-145
hugepages decreased per millisecond.
Amount of decreasing Required time Decreasing rate
hugepages (msec) (pages/msec)
------------------------------------------------------------
10,000 pages == 20GB 70 - 74 135-142
30,000 pages == 60GB 208 - 229 131-144
This means that, at this decreasing rate, freeing 6TB of hugepages will
trigger a softlockup with the default threshold of 20 sec.
Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fabian Frederick [Mon, 7 Apr 2014 22:37:53 +0000 (15:37 -0700)]
mm/memblock.c: use PFN_PHYS()
Replace ((phys_addr_t)(x) << PAGE_SHIFT) with the PFN_PHYS() macro.
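For reference, a sketch of the substitution; `base' and `pfn' are hypothetical
names used only for illustration:

  /* PFN_PHYS() from <linux/pfn.h> */
  #define PFN_PHYS(x)     ((phys_addr_t)(x) << PAGE_SHIFT)

  base = ((phys_addr_t)(pfn) << PAGE_SHIFT);      /* open-coded */
  base = PFN_PHYS(pfn);                           /* using the helper */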
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Emil Medve [Mon, 7 Apr 2014 22:37:52 +0000 (15:37 -0700)]
memblock: use for_each_memblock()
This is a small cleanup.
Signed-off-by: Emil Medve <Emilian.Medve@Freescale.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Miklos Szeredi [Mon, 7 Apr 2014 22:37:51 +0000 (15:37 -0700)]
mm: remove unused arg of set_page_dirty_balance()
There's only one caller of set_page_dirty_balance() and that will call it
with page_mkwrite == 0.
The page_mkwrite argument has been unused since commit b827e496c893
("mm: close page_mkwrite races").
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vlastimil Babka [Mon, 7 Apr 2014 22:37:50 +0000 (15:37 -0700)]
mm: try_to_unmap_cluster() should lock_page() before mlocking
A BUG_ON(!PageLocked) was triggered in mlock_vma_page() by Sasha Levin
fuzzing with trinity. The call site try_to_unmap_cluster() does not lock
the pages other than its check_page parameter (which is already locked).
The BUG_ON in mlock_vma_page() is not documented and its purpose is
somewhat unclear, but apparently it serializes against page migration,
which could otherwise fail to transfer the PG_mlocked flag. This would
not be fatal, as the page would be eventually encountered again, but
NR_MLOCK accounting would become distorted nevertheless. This patch adds
a comment to the BUG_ON in mlock_vma_page() and munlock_vma_page() to that
effect.
The call site try_to_unmap_cluster() is fixed so that for page !=
check_page, trylock_page() is attempted (to avoid possible deadlocks as we
already have check_page locked) and mlock_vma_page() is performed only
upon success. If the page lock cannot be obtained, the page is left
without PG_mlocked, which is again not a problem in the whole unevictable
memory design.
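An abbreviated sketch of the fixed call site in try_to_unmap_cluster();
variable names follow the description above, the surrounding loop is elided,
and the ret/SWAP_MLOCK handling is an assumption about that context:

  if (page == check_page) {
          /* the caller already holds this page's lock */
          mlock_vma_page(page);
          ret = SWAP_MLOCK;
  } else if (trylock_page(page)) {
          /* opportunistic: only mlock pages whose lock comes for free,
           * to avoid deadlocking on a second page lock */
          mlock_vma_page(page);
          unlock_page(page);
  }
  /* pages we could not lock are simply left without PG_mlocked */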
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Bob Liu <bob.liu@oracle.com>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Mon, 7 Apr 2014 22:37:48 +0000 (15:37 -0700)]
mm: page_alloc: spill to remote nodes before waking kswapd
On NUMA systems, a node may start thrashing cache or even swap anonymous
pages while there are still free pages on remote nodes.
This is a result of commits 81c0a2bb515f ("mm: page_alloc: fair zone
allocator policy") and fff4068cba48 ("mm: page_alloc: revert NUMA aspect
of fair allocation policy").
Before those changes, the allocator would first try all allowed zones,
including those on remote nodes, before waking any kswapds. But now,
the allocator fastpath doubles as the fairness pass, which in turn can
only consider the local node to prevent remote spilling based on
exhausted fairness batches alone. Remote nodes are only considered in
the slowpath, after the kswapds are woken up. But if remote nodes still
have free memory, kswapd should not be woken to rebalance the local node
or it may thrash the cache or swap prematurely.
Fix this by adding one more unfair pass over the zonelist that is
allowed to spill to remote nodes after the local fairness pass fails but
before entering the slowpath and waking the kswapds.
This also gets rid of the GFP_THISNODE exemption from the fairness
protocol because the unfair pass is no longer tied to kswapd, which
GFP_THISNODE is not allowed to wake up.
However, because remote spills can be more frequent now - we prefer them
over local kswapd reclaim - the allocation batches on remote nodes could
underflow more heavily. When resetting the batches, use
atomic_long_read() directly instead of zone_page_state() to calculate the
delta as the latter filters negative counter values.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: <stable@kernel.org> [3.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Michal Hocko [Mon, 7 Apr 2014 22:37:46 +0000 (15:37 -0700)]
memcg: rename high level charging functions
mem_cgroup_newpage_charge is used only for charging anonymous memory so
it is better to rename it to mem_cgroup_charge_anon.
mem_cgroup_cache_charge is used for file backed memory so rename it to
mem_cgroup_charge_file.
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Mon, 7 Apr 2014 22:37:45 +0000 (15:37 -0700)]
memcg: sanitize __mem_cgroup_try_charge() call protocol
Some callsites pass a memcg directly, some callsites pass an mm that
then has to be translated to a memcg. This makes for a terrible
function interface.
Just push the mm-to-memcg translation into the respective callsites and
always pass a memcg to mem_cgroup_try_charge().
[mhocko@suse.cz: add charge mm helper]
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Michal Hocko [Mon, 7 Apr 2014 22:37:44 +0000 (15:37 -0700)]
memcg: do not replicate get_mem_cgroup_from_mm in __mem_cgroup_try_charge
__mem_cgroup_try_charge duplicates get_mem_cgroup_from_mm for charges
which came without a memcg. The only reason seems to be a tiny
optimization when css_tryget is not called if the charge can be consumed
from the stock. Nevertheless css_tryget is very cheap since it has been
reworked to use per-cpu counting so this optimization doesn't give us
anything these days.
So let's drop the code duplication so that the code is more readable.
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Mon, 7 Apr 2014 22:37:43 +0000 (15:37 -0700)]
memcg: get_mem_cgroup_from_mm()
Instead of returning NULL from try_get_mem_cgroup_from_mm() when the mm
owner is exiting, just return root_mem_cgroup. This makes sense for all
callsites and gets rid of some of them having to fallback manually.
[fengguang.wu@intel.com: fix warnings]
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Mon, 7 Apr 2014 22:37:42 +0000 (15:37 -0700)]
memcg: remove unnecessary !mm check from try_get_mem_cgroup_from_mm()
Users pass either a mm that has been established under task lock, or use
a verified current->mm, which means the task can't be exiting.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Mon, 7 Apr 2014 22:37:41 +0000 (15:37 -0700)]
mm: memcg: push !mm handling out to page cache charge function
Only page cache charges can happen without an mm context, so push this
special case out of the inner core and into the cache charge function.
An ancient comment explains that the mm can also be NULL in case the
task is currently being migrated, but that is not actually true in the
current code, so just remove it.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Mon, 7 Apr 2014 22:37:41 +0000 (15:37 -0700)]
mm: memcg: inline mem_cgroup_charge_common()
mem_cgroup_charge_common() is used by both cache and anon pages, but
most of its body only applies to anon pages and the remainder is not
worth having in a separate function.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Mon, 7 Apr 2014 22:37:40 +0000 (15:37 -0700)]
mm: memcg: remove mem_cgroup_move_account_page_stat()
It used to disable preemption and run sanity checks but now it's only
taking a number out of one percpu counter and putting it into another.
Do this directly in the callsite and save the indirection.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Mon, 7 Apr 2014 22:37:39 +0000 (15:37 -0700)]
mm: memcg: remove unnecessary preemption disabling
lock_page_cgroup() disables preemption, remove explicit preemption
disabling for code paths holding this lock.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kirill A. Shutemov [Mon, 7 Apr 2014 22:37:38 +0000 (15:37 -0700)]
mm: use 'const char *' instead of 'char *' for reason in dump_page()
I tried to use 'dump_page(page, __func__)' for debugging, but it triggers
a warning:
warning: passing argument 2 of `dump_page' discards `const' qualifier from pointer target type [enabled by default]
Let's convert 'reason' to 'const char *' in dump_page() and friends: we
shouldn't modify it anyway.
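The before/after of the prototype change described above:

  void dump_page(struct page *page, char *reason);        /* before */
  void dump_page(struct page *page, const char *reason);  /* after  */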
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Gioh Kim [Mon, 7 Apr 2014 22:37:37 +0000 (15:37 -0700)]
mm/vmalloc.c: enhance vm_map_ram() comment
vm_map_ram() has a fragmentation problem: it cannot purge a
chunk (i.e., a 4M address space) if there is a pinning object in that
address space, so it could easily consume all of the VMALLOC address space.
We can fix the fragmentation problem by using vmap() instead of
vm_map_ram(), but vmap() is known to be slow compared to vm_map_ram();
Minchan said vm_map_ram() is 5 times faster than vmap() in his tests. So I
thought we should fix the fragmentation problem of vm_map_ram() because our
proprietary GPU driver has used it heavily.
On second thought, it's not easy, because we would have to reuse freed space
to solve the problem, and that could mean more IPIs and bitmap operations
when searching for holes. It could undermine the API's goal, which is very
fast mapping. And the fragmentation problem wouldn't even show up on a
64-bit machine.
Another option is for the user to separate long-lived and short-lived
objects and use vmap() for the long-lived ones but vm_map_ram() for the
short-lived ones. If we inform the user about the characteristics of
vm_map_ram(), the user can choose one according to the page lifetime.
Let's add some notice messages for the user.
[akpm@linux-foundation.org: tweak comment text]
Signed-off-by: Gioh Kim <gioh.kim@lge.com>
Reviewed-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Choi Gi-yong [Mon, 7 Apr 2014 22:37:36 +0000 (15:37 -0700)]
mm: fix 'ERROR: do not initialise globals to 0 or NULL' and coding style
Signed-off-by: Choi Gi-yong <yong@gnoy.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mikulas Patocka [Mon, 7 Apr 2014 22:37:35 +0000 (15:37 -0700)]
mempool: add unlikely and likely hints
Add unlikely and likely hints to the function mempool_free(). This lays out
the code in such a way that the common path executes straight through, and
it saves a cache line.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David Rientjes [Mon, 7 Apr 2014 22:37:34 +0000 (15:37 -0700)]
mm, compaction: determine isolation mode only once
The conditions that control the isolation mode in
isolate_migratepages_range() do not change during the iteration, so
extract them out and only define the value once.
This actually does have an effect: gcc doesn't optimize it by itself because
of cc->sync.
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David Rientjes [Mon, 7 Apr 2014 22:37:32 +0000 (15:37 -0700)]
res_counter: remove interface for locked charging and uncharging
The res_counter_{charge,uncharge}_locked() variants are not used in the
kernel outside of the resource counter code itself, so remove the
interface.
Signed-off-by: David Rientjes <rientjes@google.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Tim Hockin <thockin@google.com>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David Rientjes [Mon, 7 Apr 2014 22:37:30 +0000 (15:37 -0700)]
mm, mempolicy: remove per-process flag
PF_MEMPOLICY is an unnecessary optimization for CONFIG_SLAB users.
There's no significant performance degradation to checking
current->mempolicy rather than current->flags & PF_MEMPOLICY in the
allocation path, especially since this is considered unlikely().
Running TCP_RR with netperf-2.4.5 through localhost on a 16-cpu machine with
64GB of memory and without a mempolicy:
threads before after
 16     1249409   1244487
 32     1281786   1246783
 48     1239175   1239138
 64     1244642   1241841
 80     1244346   1248918
 96     1266436   1254316
112     1307398   1312135
128     1327607   1326502
Per-process flags are a scarce resource, so we should free them up whenever
possible and make them available. We'll be using one shortly for memcg oom
reserves.
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Tim Hockin <thockin@google.com>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David Rientjes [Mon, 7 Apr 2014 22:37:29 +0000 (15:37 -0700)]
mm, mempolicy: rename slab_node for clarity
slab_node() is actually a mempolicy function, so rename it to
mempolicy_slab_node() to make it clearer that it is used for processes with
mempolicies.
At the same time, cleanup its code by saving numa_mem_id() in a local
variable (since we require a node with memory, not just any node) and
remove an obsolete comment that assumes the mempolicy is actually passed
into the function.
Signed-off-by: David Rientjes <rientjes@google.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Tim Hockin <thockin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David Rientjes [Mon, 7 Apr 2014 22:37:27 +0000 (15:37 -0700)]
fork: collapse copy_flags into copy_process
copy_flags() does not use the clone_flags formal parameter and can be
collapsed into copy_process() for cleaner code.
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Tim Hockin <thockin@google.com>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Gideon Israel Dsouza [Mon, 7 Apr 2014 22:37:26 +0000 (15:37 -0700)]
mm: use macros from compiler.h instead of __attribute__((...))
To increase compiler portability there is <linux/compiler.h>, which
provides convenience macros for various gcc constructs, e.g. __weak for
__attribute__((weak)). I've replaced all instances of gcc attributes with
the right macro in the memory management (/mm) subsystem.
[akpm@linux-foundation.org: while-we're-there consistency tweaks]
Signed-off-by: Gideon Israel Dsouza <gidisrael@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Davidlohr Bueso [Mon, 7 Apr 2014 22:37:25 +0000 (15:37 -0700)]
mm: per-thread vma caching
This patch is a continuation of efforts trying to optimize find_vma(),
avoiding potentially expensive rbtree walks to locate a vma upon faults.
The original approach (https://lkml.org/lkml/2013/11/1/410), where the
largest vma was also cached, ended up being too specific and random, so
further comparison with other approaches was needed. There are
two things to consider when dealing with this: the cache hit rate and
the latency of find_vma(). Improving the hit rate does not necessarily
translate into finding the vma any faster, as the overhead of any fancy
caching scheme can be too high to consider.
We currently cache the last used vma for the whole address space, which
provides a nice optimization, reducing the total cycles in find_vma() by
up to 250%, for workloads with good locality. On the other hand, this
simple scheme is pretty much useless for workloads with poor locality.
Analyzing ebizzy runs shows that, no matter how many threads are
running, the mmap_cache hit rate is less than 2%, and in many situations
below 1%.
The proposed approach is to replace this scheme with a small per-thread
cache, maximizing hit rates at a very low maintenance cost.
Invalidations are performed by simply bumping up a 32-bit sequence
number. The only expensive operation is in the rare case of a seq
number overflow, where all caches that share the same address space are
flushed. Upon a miss, the proposed replacement policy is based on the
page number that contains the virtual address in question. Concretely,
the following results are seen on an 80 core, 8 socket x86-64 box:
1) System bootup: Most programs are single threaded, so the per-thread
scheme does improve the ~50% hit rate by just adding a few more slots to
the cache.
+----------------+----------+------------------+
| caching scheme | hit-rate | cycles (billion) |
+----------------+----------+------------------+
| baseline | 50.61% | 19.90 |
| patched | 73.45% | 13.58 |
+----------------+----------+------------------+
2) Kernel build: This one is already pretty good with the current
approach as we're dealing with good locality.
+----------------+----------+------------------+
| caching scheme | hit-rate | cycles (billion) |
+----------------+----------+------------------+
| baseline | 75.28% | 11.03 |
| patched | 88.09% | 9.31 |
+----------------+----------+------------------+
3) Oracle 11g Data Mining (4k pages): Similar to the kernel build workload.
+----------------+----------+------------------+
| caching scheme | hit-rate | cycles (billion) |
+----------------+----------+------------------+
| baseline | 70.66% | 17.14 |
| patched | 91.15% | 12.57 |
+----------------+----------+------------------+
4) Ebizzy: There's a fair amount of variation from run to run, but this
approach always shows nearly perfect hit rates, while baseline is just
about non-existent. The cycle counts can fluctuate anywhere from ~60 to
~116 for the baseline scheme, but this approach reduces them considerably.
For instance, with 80 threads:
+----------------+----------+------------------+
| caching scheme | hit-rate | cycles (billion) |
+----------------+----------+------------------+
| baseline | 1.06% | 91.54 |
| patched | 99.97% | 14.18 |
+----------------+----------+------------------+
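A condensed sketch of the per-thread cache idea: a tiny array indexed by a
hash of the faulting address, invalidated via a per-mm sequence number. The
names follow the commit's scheme, but the body here is illustrative (the
actual lookup may probe every slot rather than just the hashed one):

  #define VMACACHE_BITS           2
  #define VMACACHE_SIZE           (1U << VMACACHE_BITS)
  #define VMACACHE_HASH(addr)     (((addr) >> PAGE_SHIFT) & (VMACACHE_SIZE - 1))

  struct vm_area_struct *vmacache_find(struct mm_struct *mm, unsigned long addr)
  {
          int idx = VMACACHE_HASH(addr);
          struct vm_area_struct *vma;

          /* a stale sequence number invalidates every slot at once */
          if (current->vmacache_seqnum != mm->vmacache_seqnum)
                  return NULL;

          vma = current->vmacache[idx];
          if (vma && vma->vm_start <= addr && vma->vm_end > addr)
                  return vma;
          return NULL;
  }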
[akpm@linux-foundation.org: fix nommu build, per Davidlohr]
[akpm@linux-foundation.org: document vmacache_valid() logic]
[akpm@linux-foundation.org: attempt to untangle header files]
[akpm@linux-foundation.org: add vmacache_find() BUG_ON]
[hughd@google.com: add vmacache_valid_mm() (from Oleg)]
[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: adjust and enhance comments]
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Michel Lespinasse <walken@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Tested-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Ning Qu [Mon, 7 Apr 2014 22:37:24 +0000 (15:37 -0700)]
mm: implement ->map_pages for shmem/tmpfs
In shmem/tmpfs, we also use the generic filemap_map_pages; it seems the
additional checking is not worth a separate version of map_pages for it.
Signed-off-by: Ning Qu <quning@google.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kirill A. Shutemov [Mon, 7 Apr 2014 22:37:22 +0000 (15:37 -0700)]
mm: add debugfs tunable for fault_around_order
Let's allow people to tweak faultaround at runtime.
[akpm@linux-foundation.org: coding-style fixes]
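A hedged sketch of what such a knob could look like; the variable name, the
default value and the sanity bound are assumptions for illustration only:

  static unsigned int fault_around_order __read_mostly = 4;

  static int fault_around_order_get(void *data, u64 *val)
  {
          *val = fault_around_order;
          return 0;
  }

  static int fault_around_order_set(void *data, u64 val)
  {
          if (val > PAGE_SHIFT)           /* assumed sanity bound */
                  return -EINVAL;
          fault_around_order = val;
          return 0;
  }
  DEFINE_SIMPLE_ATTRIBUTE(fault_around_order_fops,
                  fault_around_order_get, fault_around_order_set, "%llu\n");

  static int __init fault_around_debugfs(void)
  {
          debugfs_create_file("fault_around_order", 0644, NULL, NULL,
                          &fault_around_order_fops);
          return 0;
  }
  late_initcall(fault_around_debugfs);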
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Ning Qu <quning@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kirill A. Shutemov [Mon, 7 Apr 2014 22:37:21 +0000 (15:37 -0700)]
mm: cleanup size checks in filemap_fault() and filemap_map_pages()
Minor cleanups:
- 'size' variable is now in bytes, not pages;
- use round_up(): it should be easier to read.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Ning Qu <quning@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kirill A. Shutemov [Mon, 7 Apr 2014 22:37:19 +0000 (15:37 -0700)]
mm: implement ->map_pages for page cache
filemap_map_pages() is a generic implementation of ->map_pages() for
filesystems that use the page cache.
It should be safe to use filemap_map_pages() for ->map_pages() if the
filesystem uses filemap_fault() for ->fault().
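For example, a page-cache-backed filesystem would wire the new callback up
next to its existing filemap_fault()-based ->fault(); ext4 is shown here
purely as an illustration:

  static const struct vm_operations_struct ext4_file_vm_ops = {
          .fault          = filemap_fault,
          .map_pages      = filemap_map_pages,
          .page_mkwrite   = ext4_page_mkwrite,
  };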
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Ning Qu <quning@gmail.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kirill A. Shutemov [Mon, 7 Apr 2014 22:37:18 +0000 (15:37 -0700)]
mm: introduce vm_ops->map_pages()
Here's a new version of the faultaround patchset. It took a while to tune it
and collect performance data.
The first patch adds a new callback, ->map_pages, to vm_operations_struct.
->map_pages() is called when the VM asks to map easily accessible pages.
The filesystem should find and map pages associated with offsets from
"pgoff" till "max_pgoff". ->map_pages() is called with the page table
locked and must not block. If it's not possible to reach a page without
blocking, the filesystem should skip it. The filesystem should use
do_set_pte() to set up the page table entry. A pointer to the entry
associated with offset "pgoff" is passed in the "pte" field of the vm_fault
structure. Pointers to entries for other offsets should be calculated
relative to "pte".
Currently the VM uses ->map_pages only on the read page fault path. We try
to map up to FAULT_AROUND_PAGES at a time. FAULT_AROUND_PAGES is 16 for now.
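A condensed sketch of the callback's shape and the vm_fault fields it
consumes, per the description above (the member layout here is illustrative,
not the full structures):

  struct vm_fault {
          unsigned int flags;
          pgoff_t pgoff;          /* first offset to map */
          pgoff_t max_pgoff;      /* last offset ->map_pages() may touch */
          pte_t *pte;             /* pte slot for "pgoff"; neighbours are relative */
  };

  struct vm_operations_struct {
          int (*fault)(struct vm_area_struct *vma, struct vm_fault *vmf);
          /* called with the page table lock held; must not block */
          void (*map_pages)(struct vm_area_struct *vma, struct vm_fault *vmf);
  };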
Performance data for different FAULT_AROUND_ORDER is below.
TODO:
- implement ->map_pages() for shmem/tmpfs;
- modify get_user_pages() to be able to use ->map_pages() and implement
mmap(MAP_POPULATE|MAP_NONBLOCK) on top.
=========================================================================
Tested on 4-socket machine (120 threads) with 128GiB of RAM.
A few real-world workloads. The sweet spot for FAULT_AROUND_ORDER here is
somewhere between 3 and 5. Let's say 4 :)
Linux build (make -j60)
FAULT_AROUND_ORDER Baseline 1 3 4 5 7 9
minor-faults 283,301,572 247,151,987 212,215,789 204,772,882 199,568,944 194,703,779 193,381,485
time, seconds   151.227629483  153.920996480  151.356125472  150.863792049  150.879207877  151.150764954  151.450962358
Linux rebuild (make -j60)
FAULT_AROUND_ORDER Baseline 1 3 4 5 7 9
minor-faults 5,396,854 4,148,444 2,855,286 2,577,282 2,361,957 2,169,573 2,112,643
time, seconds   27.404543757  27.559725591  27.030057426  26.855045126  26.678618635  26.974523490  26.761320095
Git test suite (make -j60 test)
FAULT_AROUND_ORDER Baseline 1 3 4 5 7 9
minor-faults 129,591,823 99,200,751 66,106,718 57,606,410 51,510,808 45,776,813 44,085,515
time, seconds 66.087215026 64.784546905 64.401156567 65.282708668 66.034016829 66.793780811 67.237810413
Two synthetic tests: access every word in file in sequential/random order.
It doesn't improve much after FAULT_AROUND_ORDER == 4.
Sequential access 16GiB file
FAULT_AROUND_ORDER Baseline 1 3 4 5 7 9
1 thread
minor-faults 4,195,437 2,098,275 525,068 262,251 131,170 32,856 8,282
time, seconds 7.250461742 6.461711074 5.493859139 5.488488147 5.707213983 5.898510832 5.109232856
8 threads
minor-faults 33,557,540 16,892,728 4,515,848 2,366,999 1,423,382 442,732 142,339
time, seconds 16.649304881 9.312555263 6.612490639 6.394316732 6.669827501 6.75078944 6.371900528
32 threads
minor-faults 134,228,222 67,526,810 17,725,386 9,716,537 4,763,731 1,668,921 537,200
time, seconds 49.164430543 29.712060103 12.938649729 10.175151004 11.840094583 9.594081325 9.928461797
60 threads
minor-faults 251,687,988 126,146,952 32,919,406 18,208,804 10,458,947 2,733,907 928,217
time, seconds 86.260656897 49.626551828 22.335007632 17.608243696 16.523119035 16.339489186 16.326390902
120 threads
minor-faults 503,352,863 252,939,677 67,039,168 35,191,827 19,170,091 4,688,357 1,471,862
time, seconds 124.589206333 79.757867787 39.508707872 32.167281632 29.972989292 28.729834575 28.042251622
Random access 1GiB file
1 thread
minor-faults 262,636 132,743 34,369 17,299 8,527 3,451 1,222
time, seconds 15.351890914 16.613802482 16.569227308 15.179220992 16.557356122 16.578247824 15.365266994
8 threads
minor-faults 2,098,948 1,061,871 273,690 154,501 87,110 25,663 7,384
time, seconds 15.040026343 15.096933500 14.474757288 14.289129964 14.411537468 14.296316837 14.395635804
32 threads
minor-faults 8,390,734 4,231,023 1,054,432 528,847 269,242 97,746 26,881
time, seconds 20.430433109 21.585235358 22.115062928 14.872878951 14.880856305 14.883370649 14.821261690
60 threads
minor-faults 15,733,258 7,892,809 1,973,393 988,266 594,789 164,994 51,691
time, seconds 26.577302548 25.692397770 18.728863715 20.153026398 21.619101933 17.745086260 17.613215273
120 threads
minor-faults 31,471,111 15,816,616 3,959,209 1,978,685 1,008,299 264,635 96,010
time, seconds 41.835322703 40.459786095 36.085306105 35.313894834 35.814445675 36.552633793 34.289210594
Touch only one page in page table in 16GiB file
FAULT_AROUND_ORDER Baseline 1 3 4 5 7 9
1 thread
minor-faults 8,372 8,324 8,270 8,260 8,249 8,239 8,237
time, seconds 0.039892712 0.045369149 0.051846126 0.063681685 0.079095975 0.17652406 0.541213386
8 threads
minor-faults 65,731 65,681 65,628 65,620 65,608 65,599 65,596
time, seconds 0.124159196 0.488600638 0.156854426 0.191901957 0.242631486 0.543569456 1.677303984
32 threads
minor-faults 262,388 262,341 262,285 262,276 262,266 262,257 263,183
time, seconds 0.452421421 0.488600638 0.565020946 0.648229739 0.789850823 1.651584361 5.000361559
60 threads
minor-faults 491,822 491,792 491,723 491,711 491,701 491,691 491,825
time, seconds 0.763288616 0.869620515 0.980727360 1.161732354 1.466915814 3.04041448 9.308612938
120 threads
minor-faults 983,466 983,655 983,366 983,372 983,363 984,083 984,164
time, seconds 1.595846553 1.667902182 2.008959376 2.425380942 2.941368804 5.977807890 18.401846125
This patch (of 2):
Introduce a new vm_ops callback, ->map_pages(), and use it to map easily
accessible pages around the fault address.
On a read page fault, if the filesystem provides ->map_pages(), we try to
map up to FAULT_AROUND_PAGES pages around the page fault address in the
hope of reducing the number of minor page faults.
We call ->map_pages() first and use ->fault() as a fallback if the page at
the offset is not ready to be mapped (cold page cache or something).
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Ning Qu <quning@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andrew Morton [Mon, 7 Apr 2014 22:37:16 +0000 (15:37 -0700)]
drivers/lguest/page_tables.c: rename do_set_pte()
"mm: introduce vm_ops->map_pages()" wants to export a do_set_pte() from core
kernel. Rename lguest's do_set_pte() to something more lguest-specific.
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Konstantin Khlebnikov [Mon, 7 Apr 2014 22:37:15 +0000 (15:37 -0700)]
tools/vm/page-types.c: page-cache sniffing feature
After this patch 'page-types' can walk over a file's mappings and
analyze populated page cache pages, mostly without disturbing their state.
It maps a chunk of the file, marks the VMA as MADV_RANDOM to turn off
readahead, pokes the VMA via mincore() to determine which pages are cached,
triggers a page fault only for those, and finally gathers information via
pagemap/kpageflags. Before unmapping, it marks the VMA as MADV_SEQUENTIAL
so that its reference bits are ignored.
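The juggling described above corresponds roughly to the following user-space sequence (a simplified sketch with error handling omitted; the real tool additionally reads pagemap/kpageflags for each resident page):
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	/* Touch only the pages of a mapped chunk that are already cached. */
	static void sniff_chunk(int fd, off_t off, size_t len)
	{
		long psize = sysconf(_SC_PAGESIZE);
		unsigned char *vec = malloc((len + psize - 1) / psize);
		char *map = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, off);

		madvise(map, len, MADV_RANDOM);		/* no readahead on our faults */
		mincore(map, len, vec);			/* which pages are resident? */
		for (size_t i = 0; i * psize < len; i++)
			if (vec[i] & 1)			/* cached: fault it in */
				(void)*(volatile char *)(map + i * psize);
		madvise(map, len, MADV_SEQUENTIAL);	/* ignore our reference bits */
		munmap(map, len);
		free(vec);
	}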
usage: page-types -f <path>
If <path> is a directory, it will analyse all files in all subdirectories.
Symlinks and mount points are not followed. Hardlinks aren't handled
specially; they'll be dumped as many times as they are found. The recursive
walk brings all dentries into the dcache and populates the page cache of
block devices, aka 'Buffers'.
It's probably worth adding an ioctl for dumping a file's page cache as an
array of PFNs, as a replacement for this hackish juggling with
mmap/madvise/mincore/pagemap. The recursive walk could also be replaced
by dumping cached inodes via some ioctl or debugfs interface and then
opening them via open_by_handle_at(); this would fix hardlink handling and
the unneeded population of the dcache and buffers. Such an interface
might be used as a data source for constructing readahead plans and for
background optimization of actively used files.
collateral changes:
+ fix 64-bit LFS: define _FILE_OFFSET_BITS instead of _LARGEFILE64_SOURCE
+ replace lseek + read with single pread
+ make show_page_range() reusable after flush
usage example:
~/src/linux/tools/vm$ sudo ./page-types -L -f page-types
foffset offset flags
page-types Inode: 2229277 Size: 89065 (22 pages)
Modify: Tue Feb 25 12:00:59 2014 (162 seconds ago)
Access: Tue Feb 25 12:01:00 2014 (161 seconds ago)
0 3cbf3b __RU_lA____M________________________
1 38946a __RU_lA____M________________________
2 1a3cec __RU_lA____M________________________
3 1a8321 __RU_lA____M________________________
4 3af7cc __RU_lA____M________________________
5 1ed532 __RU_lA_____________________________
6 2e436a __RU_lA_____________________________
7 29a35e ___U_lA_____________________________
8 2de86e ___U_lA_____________________________
9 3bdfb4 ___U_lA_____________________________
10 3cd8a3 ___U_lA_____________________________
11 2afa50 ___U_lA_____________________________
12 2534c2 ___U_lA_____________________________
13 1b7a40 ___U_lA_____________________________
14 17b0be ___U_lA_____________________________
15 392b0c ___U_lA_____________________________
16 3ba46a __RU_lA_____________________________
17 397dc8 ___U_lA_____________________________
18 1f2a36 ___U_lA_____________________________
19 21fd30 __RU_lA_____________________________
20 2c35ba __RU_l______________________________
21 20f181 __RU_l______________________________
flags page-count MB symbolic-flags long-symbolic-flags
0x000000000000002c 2 0 __RU_l______________________________ referenced,uptodate,lru
0x0000000000000068 11 0 ___U_lA_____________________________ uptodate,lru,active
0x000000000000006c 4 0 __RU_lA_____________________________ referenced,uptodate,lru,active
0x000000000000086c 5 0 __RU_lA____M________________________ referenced,uptodate,lru,active,mmap
total 22 0
~/src/linux/tools/vm$ sudo ./page-types -f /
flags page-count MB symbolic-flags long-symbolic-flags
0x0000000000000028 21761 85 ___U_l______________________________ uptodate,lru
0x000000000000002c 127279 497 __RU_l______________________________ referenced,uptodate,lru
0x0000000000000068 74160 289 ___U_lA_____________________________ uptodate,lru,active
0x000000000000006c 84469 329 __RU_lA_____________________________ referenced,uptodate,lru,active
0x000000000000007c 1 0 __RUDlA_____________________________ referenced,uptodate,dirty,lru,active
0x0000000000000228 370 1 ___U_l___I__________________________ uptodate,lru,reclaim
0x0000000000000828 49 0 ___U_l_____M________________________ uptodate,lru,mmap
0x000000000000082c 126 0 __RU_l_____M________________________ referenced,uptodate,lru,mmap
0x0000000000000868 137 0 ___U_lA____M________________________ uptodate,lru,active,mmap
0x000000000000086c 12890 50 __RU_lA____M________________________ referenced,uptodate,lru,active,mmap
total 321242 1254
Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kirill A. Shutemov [Mon, 7 Apr 2014 22:37:14 +0000 (15:37 -0700)]
mm: disable split page table lock for !MMU
There's no reason to enable the split page table lock if we don't have
page tables.
It also triggers a build error, at least on ARM, since we don't define
pmd_page() for !MMU.
In file included from arch/arm/kernel/asm-offsets.c:14:0:
include/linux/mm.h: In function 'pte_lockptr':
include/linux/mm.h:1392:2: error: implicit declaration of function 'pmd_page' [-Werror=implicit-function-declaration]
include/linux/mm.h:1392:2: warning: passing argument 1 of 'ptlock_ptr' makes pointer from integer without a cast [enabled by default]
include/linux/mm.h:1384:27: note: expected 'struct page *' but argument is of type 'int'
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Alex Thorlton [Mon, 7 Apr 2014 22:37:12 +0000 (15:37 -0700)]
exec: kill the unnecessary mm->def_flags setting in load_elf_binary()
load_elf_binary() sets current->mm->def_flags = def_flags and def_flags
is always zero. Not only does this look strange, it is also unnecessary
because mm_init() has already set ->def_flags = 0.
Signed-off-by: Alex Thorlton <athorlton@sgi.com>
Suggested-by: Oleg Nesterov <oleg@redhat.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Alex Thorlton [Mon, 7 Apr 2014 22:37:10 +0000 (15:37 -0700)]
mm, thp: add VM_INIT_DEF_MASK and PRCTL_THP_DISABLE
Add VM_INIT_DEF_MASK to allow us to set the default flags for VMs. Also
add a prctl control which allows us to set the THP disable bit in
mm->def_flags so that VMs will pick up the setting as they are created.
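From user space the knob is driven via prctl(); a minimal sketch, assuming the PR_SET_THP_DISABLE/PR_GET_THP_DISABLE constants this patch adds to <linux/prctl.h>:
	#include <stdio.h>
	#include <sys/prctl.h>

	#ifndef PR_SET_THP_DISABLE
	#define PR_SET_THP_DISABLE 41	/* values assumed from this series */
	#define PR_GET_THP_DISABLE 42
	#endif

	int main(void)
	{
		if (prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0))
			perror("PR_SET_THP_DISABLE");
		printf("thp disabled: %d\n", prctl(PR_GET_THP_DISABLE, 0, 0, 0, 0));
		return 0;
	}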
Signed-off-by: Alex Thorlton <athorlton@sgi.com>
Suggested-by: Oleg Nesterov <oleg@redhat.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Alex Thorlton [Mon, 7 Apr 2014 22:37:09 +0000 (15:37 -0700)]
mm: revert "thp: make MADV_HUGEPAGE check for mm->def_flags"
The main motivation behind this patch is to provide a way to disable THP
for jobs where the code cannot be modified, and using a malloc hook with
madvise is not an option (i.e. statically allocated data). This patch
allows us to do just that, without affecting other jobs running on the
system.
We need to do this sort of thing for jobs where THP hurts performance,
due to the possibility of increased remote memory accesses that can be
created by situations such as the following:
When you touch 1 byte of an untouched, contiguous 2MB chunk, a THP will
be handed out, and the THP will be stuck on whatever node the chunk was
originally referenced from. If many remote nodes need to do work on
that same chunk, they'll be making remote accesses.
With THP disabled, 4K pages can be handed out to separate nodes as
they're needed, greatly reducing the amount of remote accesses to
memory.
This patch is based on some of my work combined with some
suggestions/patches given by Oleg Nesterov. The main goal here is to
add a prctl switch to allow us to disable THP on a per-mm_struct basis.
Here's a bit of test data with the new patch in place...
First with the flag unset:
# perf stat -a ./prctl_wrapper_mmv3 0 ./thp_pthread -C 0 -m 0 -c 512 -b 256g
Setting thp_disabled for this task...
thp_disable: 0
Set thp_disabled state to 0
Process pid = 18027
PF/
MAX MIN TOTCPU/ TOT_PF/ TOT_PF/ WSEC/
TYPE: CPUS WALL WALL SYS USER TOTCPU CPU WALL_SEC SYS_SEC CPU NODES
512 1.120 0.060 0.000 0.110 0.110 0.000 28571428864 -9223372036854775808 55803572 23
Performance counter stats for './prctl_wrapper_mmv3_hack 0 ./thp_pthread -C 0 -m 0 -c 512 -b 256g':
273719072.841402 task-clock # 641.026 CPUs utilized [100.00%]
1,008,986 context-switches # 0.000 M/sec [100.00%]
7,717 CPU-migrations # 0.000 M/sec [100.00%]
1,698,932 page-faults # 0.000 M/sec
355,222,544,890,379 cycles # 1.298 GHz [100.00%]
536,445,412,234,588 stalled-cycles-frontend # 151.02% frontend cycles idle [100.00%]
409,110,531,310,223 stalled-cycles-backend # 115.17% backend cycles idle [100.00%]
148,286,797,266,411 instructions # 0.42 insns per cycle
# 3.62 stalled cycles per insn [100.00%]
27,061,793,159,503 branches # 98.867 M/sec [100.00%]
1,188,655,196 branch-misses # 0.00% of all branches
427.001706337 seconds time elapsed
Now with the flag set:
# perf stat -a ./prctl_wrapper_mmv3 1 ./thp_pthread -C 0 -m 0 -c 512 -b 256g
Setting thp_disabled for this task...
thp_disable: 1
Set thp_disabled state to 1
Process pid = 144957
PF/
MAX MIN TOTCPU/ TOT_PF/ TOT_PF/ WSEC/
TYPE: CPUS WALL WALL SYS USER TOTCPU CPU WALL_SEC SYS_SEC CPU NODES
512 0.620 0.260 0.250 0.320 0.570 0.001 51612901376 128000000000 100806448 23
Performance counter stats for './prctl_wrapper_mmv3_hack 1 ./thp_pthread -C 0 -m 0 -c 512 -b 256g':
138789390.540183 task-clock # 641.959 CPUs utilized [100.00%]
534,205 context-switches # 0.000 M/sec [100.00%]
4,595 CPU-migrations # 0.000 M/sec [100.00%]
63,133,119 page-faults # 0.000 M/sec
147,977,747,269,768 cycles # 1.066 GHz [100.00%]
200,524,196,493,108 stalled-cycles-frontend # 135.51% frontend cycles idle [100.00%]
105,175,163,716,388 stalled-cycles-backend # 71.07% backend cycles idle [100.00%]
180,916,213,503,160 instructions # 1.22 insns per cycle
# 1.11 stalled cycles per insn [100.00%]
26,999,511,005,868 branches # 194.536 M/sec [100.00%]
714,066,351 branch-misses # 0.00% of all branches
216.196778807 seconds time elapsed
As with previous versions of the patch, we're getting about a 2x
performance increase here. Here's a link to the test case I used, along
with the little wrapper to activate the flag:
http://oss.sgi.com/projects/memtests/thp_pthread_mmprctlv3.tar.gz
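The wrapper itself is only available at the link above, but the pattern it implements is presumably along these lines (hypothetical reconstruction, argument checking trimmed):
	#include <stdlib.h>
	#include <sys/prctl.h>
	#include <unistd.h>

	#ifndef PR_SET_THP_DISABLE
	#define PR_SET_THP_DISABLE 41	/* assumed value */
	#endif

	int main(int argc, char **argv)
	{
		if (argc < 3)
			return 1;
		/* argv[1]: 0 = leave THP enabled, 1 = disable THP for this mm */
		prctl(PR_SET_THP_DISABLE, atol(argv[1]), 0, 0, 0);
		/* mm->def_flags now survives exec, so the workload inherits it */
		execvp(argv[2], &argv[2]);
		return 1;
	}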
This patch (of 3):
Revert commit 8e72033f2a48 and add in code to fix up any issues caused by the revert.
The revert is necessary because hugepage_madvise would return -EINVAL
when VM_NOHUGEPAGE is set, which will break subsequent chunks of this
patch set.
Here's a snip of an e-mail from Gerald detailing the original purpose of
this code, and providing justification for the revert:
"The intent of commit
8e72033f2a48 was to guard against any future
programming errors that may result in an madvice(MADV_HUGEPAGE) on
guest mappings, which would crash the kernel.
Martin suggested adding the bit to arch/s390/mm/pgtable.c, if
8e72033f2a48 was to be reverted, because that check will also prevent
a kernel crash in the case described above, it will now send a
SIGSEGV instead.
This would now also allow to do the madvise on other parts, if
needed, so it is a more flexible approach. One could also say that
it would have been better to do it this way right from the
beginning..."
Signed-off-by: Alex Thorlton <athorlton@sgi.com>
Suggested-by: Oleg Nesterov <oleg@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Mon, 7 Apr 2014 22:37:07 +0000 (15:37 -0700)]
mm/compaction: clean-up code on success of ballon isolation
It is just a clean-up to reduce code size and improve readability.
There is no functional change.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Mon, 7 Apr 2014 22:37:06 +0000 (15:37 -0700)]
mm/compaction: check pageblock suitability once per pageblock
isolation_suitable() and migrate_async_suitable() are used to make sure
that a pageblock range is fine to be migrated. There is no need to
call them on every page. The current code does well when a pageblock is
not suitable, but not when it is suitable:
1) It re-checks isolation_suitable() on each page of a pageblock that was
already established as suitable.
2) It re-checks migrate_async_suitable() on each page of a pageblock that
was not entered through the next_pageblock: label, because
last_pageblock_nr is not otherwise updated.
This patch fixes the situation by 1) calling isolation_suitable() only once
per pageblock and 2) always updating last_pageblock_nr to the pageblock
that was just checked.
Additionally, move the PageBuddy() check after the pageblock unit check,
since the pageblock check is the first thing we should do and this makes
things simpler.
[vbabka@suse.cz: rephrase commit description]
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Mon, 7 Apr 2014 22:37:05 +0000 (15:37 -0700)]
mm/compaction: change the timing to check to drop the spinlock
It is odd to drop the spinlock when we scan the (SWAP_CLUSTER_MAX - 1)th
pfn page. This may result in the situation below while isolating
migratepages:
1. Try to isolate pfn pages 0x0 ~ 0x200.
2. When low_pfn is 0x1ff, ((low_pfn + 1) % SWAP_CLUSTER_MAX) == 0, so drop
the spinlock.
3. Then, to complete the isolation, retry to acquire the lock.
I think it is better to use the SWAP_CLUSTER_MAXth pfn as the criterion for
dropping the lock. This does no harm for pfn 0x0, because at that point
the locked variable would be false.
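Schematically, the check moves from the (SWAP_CLUSTER_MAX - 1)th pfn to pfns that are multiples of SWAP_CLUSTER_MAX; the snippet below only illustrates the idea and is not the exact kernel code (drop_lock() is a hypothetical helper):
	/* Before: fires at low_pfn 0x1ff, 0x3ff, ... */
	if (!((low_pfn + 1) % SWAP_CLUSTER_MAX) && locked)
		drop_lock(&locked);

	/* After: fires at low_pfn 0x0, 0x200, ...; harmless at 0x0 because
	 * "locked" is still false there. */
	if (!(low_pfn % SWAP_CLUSTER_MAX) && locked)
		drop_lock(&locked);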
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Mon, 7 Apr 2014 22:37:04 +0000 (15:37 -0700)]
mm/compaction: do not call suitable_migration_target() on every page
suitable_migration_target() checks whether a pageblock is suitable as a
migration target. In isolate_freepages_block(), it is called on every
page, which is inefficient. So make it be called once per pageblock.
suitable_migration_target() also checks whether a page is high-order or not,
but its criterion for high-order is the pageblock order, so calling it once
per pageblock range is not a problem.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Mon, 7 Apr 2014 22:37:03 +0000 (15:37 -0700)]
mm/compaction: disallow high-order page for migration target
The purpose of compaction is to get a high-order page. Currently, if we
find a high-order page while searching for a migration target page, we
break it into order-0 pages and use them as the migration target. This is
contrary to the purpose of compaction, so disallow high-order pages from
being used as migration targets.
Additionally, clean up the logic in suitable_migration_target() to simplify
the code. There are no functional changes from this clean-up.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Michal Hocko [Mon, 7 Apr 2014 22:37:01 +0000 (15:37 -0700)]
mm: exclude memoryless nodes from zone_reclaim
We had a report about strange OOM killer strikes on a PPC machine,
although there was a lot of free swap and tons of anonymous memory
which could have been swapped out. In the end it turned out that the OOM
was a side effect of zone reclaim, which wasn't unmapping and swapping out,
and so the system was pushed to OOM. Although this sounds like a bug
somewhere in the kswapd vs. zone reclaim vs. direct reclaim
interaction, numactl on the said hardware suggests that zone reclaim
should not have been enabled in the first place:
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 0 size: 0 MB
node 0 free: 0 MB
node 2 cpus:
node 2 size: 7168 MB
node 2 free: 6019 MB
node distances:
node 0 2
0: 10 40
2: 40 10
So all the CPUs are associated with Node 0, which doesn't have any memory,
while Node 2 contains all the available memory. The node distances cause
zone_reclaim_mode to be enabled automatically.
Zone reclaim is intended to keep allocations local, but this doesn't
make any sense on memoryless nodes. So let's exclude such nodes in
init_zone_allows_reclaim(), which evaluates zone reclaim behavior and the
suitable reclaim_nodes.
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
Tested-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Davidlohr Bueso [Mon, 7 Apr 2014 22:37:01 +0000 (15:37 -0700)]
mm/memory.c: update comment in unmap_single_vma()
The described issue now occurs inside mmap_region(), and unfortunately it
is still valid.
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Weijie Yang [Mon, 7 Apr 2014 22:37:00 +0000 (15:37 -0700)]
mm/vmscan: do not check compaction_ready on promoted zones
We abort direct reclaim if we find that the zone is ready for compaction.
Sometimes the zone is just a highmem zone promoted to force a scan of
highmem, which is not the zone the caller intended to allocate a
page from. In this situation, setting aborted_reclaim to indicate that the
caller should turn back and retry the allocation is a waste of time and
could cause a loop in __alloc_pages_slowpath().
This patch does not check compaction_ready() on promoted zones, to avoid
the above situation. aborted_reclaim is only set if the caller's intended
zone is ready for compaction.
Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Weijie Yang [Mon, 7 Apr 2014 22:36:59 +0000 (15:36 -0700)]
mm/vmscan: restore sc->gfp_mask after promoting it to __GFP_HIGHMEM
We promote sc->gfp_mask to __GFP_HIGHMEM to forcibly scan highmem if
there are too many buffer_heads pinning highmem. See cc715d99e5 ("mm:
vmscan: forcibly scan highmem if there are too many buffer_heads pinning
highmem").
This patch restores sc->gfp_mask to the caller's original value after
finishing the scan job, to avoid impacting other invocations from its
upper caller, such as vmpressure_prio() and shrink_slab().
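Schematically, the fix amounts to saving and restoring the mask around the zone walk (a sketch of the idea rather than the literal diff):
	static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
	{
		gfp_t orig_mask = sc->gfp_mask;

		if (buffer_heads_over_limit)
			sc->gfp_mask |= __GFP_HIGHMEM;	/* force highmem to be scanned */

		/* ... walk the zonelist and shrink each eligible zone ... */

		sc->gfp_mask = orig_mask;	/* don't leak the promotion to callers
						 * such as vmpressure_prio()/shrink_slab() */
	}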
Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Rik van Riel [Mon, 7 Apr 2014 22:36:57 +0000 (15:36 -0700)]
mm: move mmu notifier call from change_protection to change_pmd_range
The NUMA scanning code can end up iterating over many gigabytes of
unpopulated memory, especially in the case of a freshly started KVM
guest with lots of memory.
This results in the mmu notifier code being called even when there are
no mapped pages in a virtual address range. The amount of time wasted
can be enough to trigger soft lockup warnings with very large KVM
guests.
This patch moves the mmu notifier call to the pmd level, which
represents 1GB areas of memory on x86-64. Furthermore, the mmu notifier
code is only called from the address in the PMD where present mappings
are first encountered.
The hugetlbfs code is left alone for now; hugetlb mappings are not
relocatable, and as such are left alone by the NUMA code, and should
never trigger this problem to begin with.
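The shape of the change in change_pmd_range() is roughly as follows (simplified sketch; locking, THP and hugetlb handling omitted, and the function is renamed for illustration):
	static void example_change_pmd_range(struct vm_area_struct *vma, pud_t *pud,
					     unsigned long addr, unsigned long end,
					     pgprot_t newprot)
	{
		struct mm_struct *mm = vma->vm_mm;
		unsigned long next, mni_start = 0;
		pmd_t *pmd = pmd_offset(pud, addr);

		do {
			next = pmd_addr_end(addr, end);
			if (pmd_none_or_clear_bad(pmd))
				continue;	/* nothing mapped: no notifier call */
			if (!mni_start) {	/* first populated pmd in the range */
				mni_start = addr;
				mmu_notifier_invalidate_range_start(mm, mni_start, end);
			}
			/* ... change protections on the ptes under this pmd ... */
		} while (pmd++, addr = next, addr != end);

		if (mni_start)
			mmu_notifier_invalidate_range_end(mm, mni_start, end);
	}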
Signed-off-by: Rik van Riel <riel@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Xing Gang <gang.xing@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mel Gorman [Mon, 7 Apr 2014 22:36:56 +0000 (15:36 -0700)]
mm: numa: recheck for transhuge pages under lock during protection changes
Sasha reported the following bug using trinity
kernel BUG at mm/mprotect.c:149!
invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
Dumping ftrace buffer:
(ftrace buffer empty)
Modules linked in:
CPU: 20 PID: 26219 Comm: trinity-c216 Tainted: G W 3.14.0-rc5-next-20140305-sasha-00011-ge06f5f3-dirty #105
task: ffff8800b6c80000 ti: ffff880228436000 task.ti: ffff880228436000
RIP: change_protection_range+0x3b3/0x500
Call Trace:
change_protection+0x25/0x30
change_prot_numa+0x1b/0x30
task_numa_work+0x279/0x360
task_work_run+0xae/0xf0
do_notify_resume+0x8e/0xe0
retint_signal+0x4d/0x92
The VM_BUG_ON was added in -mm by the patch "mm,numa: reorganize
change_pmd_range". The race existed without the patch but was just
harder to hit.
The problem is that a transhuge check is made without holding the PTL.
It's possible at the time of the check that a parallel fault clears the
pmd and inserts a new one which then triggers the VM_BUG_ON check. This
patch removes the VM_BUG_ON but fixes the race by rechecking transhuge
under the PTL when marking page tables for NUMA hinting and bailing if a
race occurred. It is not a problem for calls to mprotect() as they hold
mmap_sem for write.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Rik van Riel [Mon, 7 Apr 2014 22:36:55 +0000 (15:36 -0700)]
mm,numa: reorganize change_pmd_range()
Reorganize the order of ifs in change_pmd_range a little, in preparation
for the next patch.
[akpm@linux-foundation.org: fix indenting, per David]
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Xing Gang <gang.xing@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Naoya Horiguchi [Mon, 7 Apr 2014 22:36:54 +0000 (15:36 -0700)]
mm/hugetlb.c: add NULL check of return value of huge_pte_offset
huge_pte_offset() could return NULL, so we need a NULL check to avoid
potential NULL pointer dereferences.
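The pattern at the call sites is simply to test the returned pointer before using it; a generic sketch (the hypothetical helper name is only for illustration):
	static bool example_huge_pte_present(struct mm_struct *mm, unsigned long addr)
	{
		pte_t *ptep = huge_pte_offset(mm, addr);

		if (!ptep)		/* no page table here: nothing to inspect */
			return false;
		return !huge_pte_none(huge_ptep_get(ptep));
	}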
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fabian Frederick [Mon, 7 Apr 2014 22:36:53 +0000 (15:36 -0700)]
ntfs: logging clean-up
- Convert the spinlock/static array to va_format (inspired by Joe Perches'
help on previous logging patches).
- Convert printk(KERN_ERR ...) to pr_warn() in __ntfs_warning().
- Convert printk(KERN_ERR ...) to pr_err() in __ntfs_error().
- Convert printk(KERN_DEBUG ...) to pr_debug() in __ntfs_debug(). (Note that
__ntfs_debug() is still guarded by #if DEBUG.)
- Improve !DEBUG to parse all arguments (Joe Perches).
- Sparse pr_foo() conversions in super.c.
The NTFS and NTFS-fs prefixes, as well as 'warning' and 'error', were
removed: pr_foo() automatically adds the module name, and the error level
is already specified.
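The va_format conversion follows the usual kernel pattern of handing a va_list to printk via %pV; a sketch of what __ntfs_error() ends up looking like (simplified, details may differ from the actual patch):
	void __ntfs_error(const char *function, const struct super_block *sb,
			  const char *fmt, ...)
	{
		struct va_format vaf;
		va_list args;

		va_start(args, fmt);
		vaf.fmt = fmt;
		vaf.va = &args;
		if (sb)
			pr_err("(device %s): %s(): %pV\n", sb->s_id, function, &vaf);
		else
			pr_err("%s(): %pV\n", function, &vaf);
		va_end(args);
	}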
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Cc: Anton Altaparmakov <anton@tuxera.com>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sun, 6 Apr 2014 17:09:38 +0000 (10:09 -0700)]
Merge tag 'nfs-for-3.15-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
Pull NFS client updates from Trond Myklebust:
"Highlights include:
- Stable fix for a use after free issue in the NFSv4.1 open code
- Fix the SUNRPC bi-directional RPC code to account for TCP segmentation
- Optimise usage of readdirplus when confronted with 'ls -l' situations
- Soft mount bugfixes
- NFS over RDMA bugfixes
- NFSv4 close locking fixes
- Various NFSv4.x client state management optimisations
- Rename/unlink code cleanups"
* tag 'nfs-for-3.15-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs: (28 commits)
nfs: pass string length to pr_notice message about readdir loops
NFSv4: Fix a use-after-free problem in open()
SUNRPC: rpc_restart_call/rpc_restart_call_prepare should clear task->tk_status
SUNRPC: Don't let rpc_delay() clobber non-timeout errors
SUNRPC: Ensure call_connect_status() deals correctly with SOFTCONN tasks
SUNRPC: Ensure call_status() deals correctly with SOFTCONN tasks
NFSv4: Ensure we respect soft mount timeouts during trunking discovery
NFSv4: Schedule recovery if nfs40_walk_client_list() is interrupted
NFS: advertise only supported callback netids
SUNRPC: remove KERN_INFO from dprintk() call sites
SUNRPC: Fix large reads on NFS/RDMA
NFS: Clean up: revert increase in READDIR RPC buffer max size
SUNRPC: Ensure that call_bind times out correctly
SUNRPC: Ensure that call_connect times out correctly
nfs: emit a fsnotify_nameremove call in sillyrename codepath
nfs: remove synchronous rename code
nfs: convert nfs_rename to use async_rename infrastructure
nfs: make nfs_async_rename non-static
nfs: abstract out code needed to complete a sillyrename
NFSv4: Clear the open state flags if the new stateid does not match
...
Linus Torvalds [Sun, 6 Apr 2014 16:38:07 +0000 (09:38 -0700)]
Merge tag 'modules-next-for-linus' of git://git./linux/kernel/git/rusty/linux
Pull module updates from Rusty Russell:
"Nothing major: the stricter permissions checking for sysfs broke a
staging driver; fix included. Greg KH said he'd take the patch but
hadn't as the merge window opened, so it's included here to avoid
breaking build"
* tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux:
staging: fix up speakup kobject mode
Use 'E' instead of 'X' for unsigned module taint flag.
VERIFY_OCTAL_PERMISSIONS: stricter checking for sysfs perms.
kallsyms: fix percpu vars on x86-64 with relocation.
kallsyms: generalize address range checking
module: LLVMLinux: Remove unused function warning from __param_check macro
Fix: module signature vs tracepoints: add new TAINT_UNSIGNED_MODULE
module: remove MODULE_GENERIC_TABLE
module: allow multiple calls to MODULE_DEVICE_TABLE() per module
module: use pr_cont
Linus Torvalds [Sun, 6 Apr 2014 15:11:57 +0000 (08:11 -0700)]
Merge git://git./linux/kernel/git/cmetcalf/linux-tile
Pull arch/tile updates from Chris Metcalf:
"These fix a few stray build issues seen in linux-next, and also add
the minimal required support for perf to tilegx"
* git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
arch/tile: remove unused variable 'devcap'
tile: Fix vDSO compilation issue with allyesconfig
perf tools: Allow building for tile
tile/perf: Support perf_events on tilegx and tilepro
tile: Enable NMIs on return from handle_nmi() without errors
tile: Add support for handling PMC hardware
tile: don't use __get_cpu_var() with structure-typed arguments
tile: avoid overflow in ns2cycles
Linus Torvalds [Sun, 6 Apr 2014 01:49:31 +0000 (18:49 -0700)]
Merge tag 'dm-3.15-changes' of git://git./linux/kernel/git/device-mapper/linux-dm
Pull device mapper changes from Mike Snitzer:
- Fix dm-cache corruption caused by discard_block_size > cache_block_size
- Fix a lock-inversion detected by LOCKDEP in dm-cache
- Fix a dangling bio bug in the dm-thinp target's process_deferred_bios
error path
- Fix corruption due to non-atomic transaction commit which allowed a
metadata superblock to be written before all other metadata was
successfully written -- this is common to all targets that use the
persistent-data library's transaction manager (dm-thinp, dm-cache and
dm-era).
- Various small cleanups in the DM core
- Add the dm-era target which is useful for keeping track of which
blocks were written within a user defined period of time called an
'era'. Use cases include tracking changed blocks for backup
software, and partially invalidating the contents of a cache to
restore cache coherency after rolling back a vendor snapshot.
- Improve the on-disk layout of multithreaded writes to the
dm-thin-pool by splitting the pool's deferred bio list to be a
per-thin device list and then sorting that list using an rb_tree.
The subsequent read throughput of the data written via multiple
threads improved by ~70%.
- Simplify the multipath target's handling of queuing IO by pushing
requests back to the request queue rather than queueing the IO
internally.
* tag 'dm-3.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (24 commits)
dm cache: fix a lock-inversion
dm thin: sort the per thin deferred bios using an rb_tree
dm thin: use per thin device deferred bio lists
dm thin: simplify pool_is_congested
dm thin: fix dangling bio in process_deferred_bios error path
dm mpath: print more useful warnings in multipath_message()
dm-mpath: do not activate failed paths
dm mpath: remove extra nesting in map function
dm mpath: remove map_io()
dm mpath: reduce memory pressure when requeuing
dm mpath: remove process_queued_ios()
dm mpath: push back requests instead of queueing
dm table: add dm_table_run_md_queue_async
dm mpath: do not call pg_init when it is already running
dm: use RCU_INIT_POINTER instead of rcu_assign_pointer in __unbind
dm: stop using bi_private
dm: remove dm_get_mapinfo
dm: make dm_table_alloc_md_mempools static
dm: take care to copy the space map roots before locking the superblock
dm transaction manager: fix corruption due to non-atomic transaction commit
...
Linus Torvalds [Sun, 6 Apr 2014 01:46:26 +0000 (18:46 -0700)]
Merge tag 'iommu-updates-v3.15' of git://git./linux/kernel/git/joro/iommu
Pull IOMMU updates from Joerg Roedel:
"This time a few more updates queued up.
- Rework VT-d code to support ACPI devices
- Improvements for memory and PCI hotplug support in the VT-d driver
- Device-tree support for OMAP IOMMU
- Convert OMAP IOMMU to use devm_* interfaces
- Fixed PASID support for AMD IOMMU
- Other random cleanups and fixes for OMAP, ARM-SMMU and SHMOBILE
IOMMU
Most of the changes are in the VT-d driver because some rework was
necessary for better hotplug and ACPI device support"
* tag 'iommu-updates-v3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (75 commits)
iommu/vt-d: Fix error handling in ANDD processing
iommu/vt-d: returning free pointer in get_domain_for_dev()
iommu/vt-d: Only call dmar_acpi_dev_scope_init() if DRHD units present
iommu/vt-d: Check for NULL pointer in dmar_acpi_dev_scope_init()
iommu/amd: Fix logic to determine and checking max PASID
iommu/vt-d: Include ACPI devices in iommu=pt
iommu/vt-d: Finally enable translation for non-PCI devices
iommu/vt-d: Remove to_pci_dev() in intel_map_page()
iommu/vt-d: Remove pdev from intel_iommu_attach_device()
iommu/vt-d: Remove pdev from iommu_no_mapping()
iommu/vt-d: Make domain_add_dev_info() take struct device
iommu/vt-d: Make domain_remove_one_dev_info() take struct device
iommu/vt-d: Rename 'hwdev' variables to 'dev' now that that's the norm
iommu/vt-d: Remove some pointless to_pci_dev() calls
iommu/vt-d: Make get_valid_domain_for_dev() take struct device
iommu/vt-d: Make iommu_should_identity_map() take struct device
iommu/vt-d: Handle RMRRs for non-PCI devices
iommu/vt-d: Make get_domain_for_dev() take struct device
iommu/vt-d: Make domain_context_mapp{ed,ing}() take struct device
iommu/vt-d: Make device_to_iommu() cope with non-PCI devices
...
Linus Torvalds [Sun, 6 Apr 2014 01:45:11 +0000 (18:45 -0700)]
Merge branch 'hwmon-for-linus' of git://git./linux/kernel/git/jdelvare/staging
Pull hwmon updates from Jean Delvare:
"This includes a number of driver conversions to
devm_hwmon_device_register_with_groups, a few cleanups, and
support for the ITE IT8623E"
* 'hwmon-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jdelvare/staging:
hwmon: (it87) Add support for IT8623E
hwmon: (it87) Fix IT8603E define name
hwmon: (lm90) Convert to use hwmon_device_register_with_groups
hwmon: (lm90) Create all sysfs groups in one call
hwmon: (lm90) Always use the dev variable in the probe function
hwmon: (lm90) Create most optional attributes with sysfs_create_group
hwmon: Avoid initializing the same field twice
hwmon: (pc87360) Avoid initializing the same field twice
hwmon: (lm80) Convert to use devm_hwmon_device_register_with_groups
hwmon: (adm1021) Convert to use devm_hwmon_device_register_with_groups
hwmon: (lm63) Avoid initializing the same field twice
hwmon: (lm63) Convert to use devm_hwmon_device_register_with_groups
hwmon: (lm63) Create all sysfs groups in one call
hwmon: (lm63) Introduce 'dev' variable to point to client->dev
hwmon: (lm63) Add additional sysfs group for temp2_type attribute
hwmon: (f71805f) Fix author's address
Linus Torvalds [Sun, 6 Apr 2014 01:39:18 +0000 (18:39 -0700)]
Merge tag 'clk-for-linus-3.15' of git://git.linaro.org/people/mike.turquette/linux
Pull clock framework changes from Mike Turquette:
"The clock framework changes for 3.15 look similar to past pull
requests. Mostly clock driver updates, more Device Tree support in
the form of common functions useful across platforms and a handful of
features and fixes to the framework core"
* tag 'clk-for-linus-3.15' of git://git.linaro.org/people/mike.turquette/linux: (86 commits)
clk: shmobile: fix setting paretn clock rate
clk: shmobile: rcar-gen2: fix lb/sd0/sd1/sdh clock parent to pll1
clk: Fix minor errors in of_clk_init() function comments
clk: reverse default clk provider initialization order in of_clk_init()
clk: sirf: update copyright years to 2014
clk: mmp: try to use closer one when do round rate
clk: mmp: fix the wrong calculation formula
clk: mmp: fix wrong mask when calculate denominator
clk: st: Adds quadfs clock binding
clk: st: Adds clockgen-vcc and clockgen-mux clock binding
clk: st: Adds clockgen clock binding
clk: st: Adds divmux and prediv clock binding
clk: st: Support for A9 MUX clocks
clk: st: Support for ClockGenA9/DDR/GPU
clk: st: Support for QUADFS inside ClockGenB/C/D/E/F
clk: st: Support for VCC-mux and MUX clocks
clk: st: Support for PLLs inside ClockGenA(s)
clk: st: Support for DIVMUX and PreDiv Clocks
clk: support hardware-specific debugfs entries
clk: s2mps11: Use of_get_child_by_name
...
Linus Torvalds [Sun, 6 Apr 2014 01:32:31 +0000 (18:32 -0700)]
Merge tag 'pwm/for-3.15-rc1' of git://git./linux/kernel/git/thierry.reding/linux-pwm
Pull pwm changes from Thierry Reding:
"The legacy HAVE_PWM Kconfig symbol is finally being retired. Thanks a
lot to Sascha Hauer for doing that.
Three new drivers are added: Freescale FTM, Cirrus Logic CLPS711X and
Intel Low Power Subsystem.
An assortment of fixes and cleanups rounds things off for this release
cycle"
* tag 'pwm/for-3.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/thierry.reding/linux-pwm:
pwm: pxa: Constify OF match table
pwm: pxa: Fix typo "pwm" -> "PWM"
Revert "pwm: pxa: Use of_match_ptr()"
pwm: add support for Intel Low Power Subsystem PWM
pwm: Add CLPS711X PWM support
pwm: atmel: correct CDTY calculation
pwm: atmel: Fix polarity handling
Documentation: Add device tree bindings for Freescale FTM PWM.
pwm: Add Freescale FTM PWM driver support
pwm: pxa: Use of_match_ptr()
pwm: samsung: Use SIMPLE_DEV_PM_OPS macro
pwm: renesas-tpu: Add dependency on HAS_IOMEM
pwm: Remove obsolete HAVE_PWM Kconfig symbol
Linus Torvalds [Sat, 5 Apr 2014 22:46:37 +0000 (15:46 -0700)]
Merge tag 'tags/cleanup2-3.15' of git://git./linux/kernel/git/arm/arm-soc
Pull ARM SoC late cleanups from Arnd Bergmann:
"These could not be part of the first cleanup branch, because they
either came too late in the cycle, or they have dependencies on other
branches. Important changes are:
- The integrator platform is almost multiplatform capable after some
reorganization (Linus Walleij)
- Minor cleanups on Zynq (Michal Simek)
- Lots of changes for Exynos and other Samsung platforms, including
further preparations for multiplatform support and the clocks
bindings are rearranged"
* tag 'tags/cleanup2-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (54 commits)
devicetree: fix newly added exynos sata bindings
ARM: EXYNOS: Fix compilation error in cpuidle.c
ARM: S5P64X0: Explicitly include linux/serial_s3c.h in mach/pm-core.h
ARM: EXYNOS: Remove hardware.h file
ARM: SAMSUNG: Remove hardware.h inclusion
ARM: S3C24XX: Remove invalid code from hardware.h
dt-bindings: clock: Move exynos-audss-clk.h to dt-bindings/clock
ARM: dts: Keep some essential LDOs enabled for arndale-octa board
ARM: dts: Disable MDMA1 node for arndale-octa board
ARM: S3C64XX: Fix build for implicit serial_s3c.h inclusion
serial: s3c: Fix build of header without serial_core.h preinclusion
ARM: EXYNOS: Allow wake-up using GIC interrupts
ARM: EXYNOS: Stop using legacy Samsung PM code
ARM: EXYNOS: Remove PM initcalls and useless indirection
ARM: EXYNOS: Fix abuse of CONFIG_PM
ARM: SAMSUNG: Move s3c_pm_check_* prototypes to plat/pm-common.h
ARM: SAMSUNG: Move common save/restore helpers to separate file
ARM: SAMSUNG: Move Samsung PM debug code into separate file
ARM: SAMSUNG: Consolidate PM debug functions
ARM: SAMSUNG: Use debug_ll_addr() to get UART base address
...
Linus Torvalds [Sat, 5 Apr 2014 22:38:41 +0000 (15:38 -0700)]
Merge tag 'sh-3.15' of git://git./linux/kernel/git/arm/arm-soc
Pull ARM SoC sh driver change from Arnd Bergmann:
"The drivers/sh subdirectory used to get merged through the SH
architecture tree, but things are in flux there and some of the
drivers are shared with ARM shmobile, we have picked it up for the
time being.
There is only one trivial patch from Laurent Pinchart this time"
* tag 'sh-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
sh: intc: Enable driver compilation with COMPILE_TEST
Linus Torvalds [Sat, 5 Apr 2014 22:37:40 +0000 (15:37 -0700)]
Merge tag 'drivers-3.15' of git://git./linux/kernel/git/arm/arm-soc
Pull ARM SoC driver changes from Arnd Bergmann:
"These changes are mostly for ARM specific device drivers that either
don't have an upstream maintainer, or that had the maintainer ask us
to pick up the changes to avoid conflicts.
A large chunk of this are clock drivers (bcm281xx, exynos, versatile,
shmobile), aside from that, reset controllers for STi as well as a
large rework of the Marvell Orion/EBU watchdog driver are notable"
* tag 'drivers-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (99 commits)
Revert "dts: socfpga: Add DTS entry for adding the stmmac glue layer for stmmac."
Revert "net: stmmac: Add SOCFPGA glue driver"
ARM: shmobile: r8a7791: Fix SCIFA3-5 clocks
ARM: STi: Add reset controller support to mach-sti Kconfig
drivers: reset: stih416: add softreset controller
drivers: reset: stih415: add softreset controller
drivers: reset: Reset controller driver for STiH416
drivers: reset: Reset controller driver for STiH415
drivers: reset: STi SoC system configuration reset controller support
dts: socfpga: Add sysmgr node so the gmac can use to reference
dts: socfpga: Add support for SD/MMC on the SOCFPGA platform
reset: Add optional resets and stubs
ARM: shmobile: r7s72100: fix bus clock calculation
Power: Reset: Generalize qnap-poweroff to work on Synology devices.
dts: socfpga: Update clock entry to support multiple parents
ARM: socfpga: Update socfpga_defconfig
dts: socfpga: Add DTS entry for adding the stmmac glue layer for stmmac.
net: stmmac: Add SOCFPGA glue driver
watchdog: orion_wdt: Use %pa to print 'phys_addr_t'
drivers: cci: Export CCI PMU revision
...
Linus Torvalds [Sat, 5 Apr 2014 22:29:04 +0000 (15:29 -0700)]
Merge tag 'dt-3.15' of git://git./linux/kernel/git/arm/arm-soc
Pull ARM SoC device tree changes from Arnd Bergmann:
"A large part of the arm-soc patches are nowadays DT changes, adding
support for new SoCs, boards and devices without changing kernel
source. The plan is still to move the devicetree files out of the
kernel tree and reduce the amount of churn going on here, but we keep
finding reasons to delay doing that.
Changes are really all over the place, with little sticking out
particularly. We have contributions from a total of 116 people in
this branch.
Unfortunately, the size of this branch also causes a significant
number of conflicts at the moment, typically when subsystem
maintainers merge patches that change the driver at the same time as
the dts files. In most cases this could be avoided because the dts
changes are supposed to be compatible in both ways, and we are asking
everyone to send ARM dts changes through our tree only"
* tag 'dt-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (541 commits)
dts: stmmac: Document the clocks property in the stmmac base document
dts: socfpga: Add DTS entry for adding the stmmac glue layer for stmmac.
ARM: STi: stih41x: Add support for the FSM Serial Flash Controller
ARM: STi: stih416: Add support for the FSM Serial Flash Controller
ARM: tegra: fix Dalmore pinctrl configuration
ARM: dts: keystone: use common "ti,keystone" compatible instead of -evm
ARM: dts: k2hk-evm: set ubifs partition size for 512M NAND
ARM: dts: Build all keystone dt blobs
ARM: dts: keystone: Fix control register range for clktsip
ARM: dts: keystone: Fix domain register range for clkfftc1
ARM: dts: bcm28155-ap: leave camldo1 on to fix reboot
ARM: dts: add bcm590xx pmu support and enable for bcm28155-ap
ARM: dts: bcm21664: Add device tree files.
ARM: DT: bcm21664: Device tree bindings
ARM: efm32: properly namespace i2c location property
ARM: efm32: fix unit address part in USART2 device nodes' names
ARM: mvebu: Enable NAND controller in Armada 385-DB
ARM: mvebu: Add support for NAND controller in Armada 38x SoC
ARM: mvebu: Add the Core Divider clock to Armada 38x SoCs
ARM: mvebu: Add a 2 GHz fixed-clock on Armada 38x SoCs
...
Linus Torvalds [Sat, 5 Apr 2014 21:44:13 +0000 (14:44 -0700)]
Merge tag 'boards-3.15' of git://git./linux/kernel/git/arm/arm-soc
Pull ARM SoC board changes from Arnd Bergmann:
"As we continue to replace board files with device tree descriptions,
this part of the ARM support is getting smaller. We have basically
just defconfig changes here this time, and a significant number of
Renesas shmobile changes, as Renesas is still in the process of
deprecating board file support"
* tag 'boards-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (92 commits)
ARM: enable fhandle in multi_v7_defconfig
ARM: tegra: enable fhandle in tegra_defconfig
ARM: update multi_v7_defconfig for Tegra
ARM: add Marvell Dove and some drivers to multi_v7 defconfig
ARM: fix duplicate symbols in multi_v5_defconfig
ARM: pxa: add gpio keys information
ARM: tegra: defconfig updates
ARM: config: keystone: enable AEMIF/NAND support
ARM: qcom: Enable basic support for Qualcomm platforms in multi_v7_defconfig
ARM: kirkwood: Add HP T5325 devices to {multi|mvebu}_v5_defconfig
ARM: config: Add mvebu_v5_defconfig
ARM: config: Add a multi_v5_defconfig
ARM: shmobile: r7s72100: update defconfig for I2C usage
ARM: shmobile: Remove Lager DT reference legacy clock bits
ARM: shmobile: Remove Koelsch DT reference legacy clock bits
ARM: shmobile: Remove KZM9D board code
ARM: mvebu: update defconfigs for Armada 375 and 38x
ARM: dove: Enable watchdog support in the defconfig
ARM: mvebu: Enable watchdog support in defconfig
ARM: config: keystone: enable led support
...
Linus Torvalds [Sat, 5 Apr 2014 21:19:54 +0000 (14:19 -0700)]
Merge tag 'soc-3.15' of git://git./linux/kernel/git/arm/arm-soc
Pull ARM SoC specific changes from Arnd Bergmann:
"Lots of changes specific to one of the SoC families. Some that stick
out are:
- mach-qcom gains new features, most importantly SMP support for the
newer chips (Stephen Boyd, Rohit Vaswani)
- mvebu gains support for three new SoCs: Armada 375, 380 and 385
(Thomas Petazzoni and Free-electrons team)
- SMP support for Rockchips (Heiko Stübner)
- Lots of i.MX changes (Shawn Guo)
- Added support for BCM5301x SoC (Hauke Mehrtens)
- Multiplatform support for Marvell Kirkwood and Dove (Andrew Lunn
and Sebastian Hesselbarth doing the final part of a long journey)
- Unify davinci platforms and remove obsolete ones (Sekhar Nori, Arnd
Bergmann)"
* tag 'soc-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (126 commits)
ARM: sunxi: Select HAVE_ARM_ARCH_TIMER
ARM: cache-tauros2: remove ARMv6 code
ARM: mvebu: don't select CONFIG_NEON
ARM: davinci: fix DT booting with default defconfig
ARM: configs: bcm_defconfig: enable bcm590xx regulator support
ARM: davinci: remove tnetv107x support
MAINTAINERS: Update ARM STi maintainers
ARM: restrict BCM_KONA_UART to ARCH_BCM_MOBILE
ARM: bcm21664: Add board support.
ARM: sunxi: Add the new watchog compatibles to the reboot code
ARM: enable ARM_HAS_SG_CHAIN for multiplatform
ARM: davinci: remove da8xx_omapl_defconfig
ARM: davinci: da8xx: fix multiple watchdog device registration
ARM: davinci: add da8xx specific configs to davinci_all_defconfig
ARM: davinci: enable da8xx build concurrently with older devices
ARM: BCM5301X: workaround suppress fault
ARM: BCM5301X: add early debugging support
ARM: BCM5301X: initial support for the BCM5301X/BCM470X SoCs with ARM CPU
ARM: mach-bcm: Remove GENERIC_TIME
ARM: shmobile: APMU: Fix warnings due to improper printk formats
...
Linus Torvalds [Sat, 5 Apr 2014 20:51:19 +0000 (13:51 -0700)]
Merge tag 'cleanup-3.15' of git://git./linux/kernel/git/arm/arm-soc
Pull ARM SoC cleanups from Arnd Bergmann:
"These cleanup patches are mainly move stuff around and should all be
harmless. They are mainly split out so that other branches can be
based on top to avoid conflicts.
Notable changes are:
- We finally remove all mach/timex.h, after CLOCK_TICK_RATE is no
longer used (Uwe Kleine-König)
- The Qualcomm MSM platform is split out into legacy mach-msm and
new-style mach-qcom, to allow easier maintenance of the new
hardware support without regressions (Kumar Gala)
- A rework of some of the Kconfig logic to simplify multiplatform
support (Rob Herring)
- Samsung Exynos gets closer to supporting multiplatform (Sachin
Kamat and others)
- mach-bcm3528 gets merged into mach-bcm (Stephen Warren)
- at91 gains some common clock framework support (Alexandre Belloni,
Jean-Jacques Hiblot and other French people)"
* tag 'cleanup-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (89 commits)
ARM: hisi: select HAVE_ARM_SCU only for SMP
ARM: efm32: allow uncompress debug output
ARM: prima2: build reset code standalone
ARM: at91: add PWM clock
ARM: at91: move sam9261 SoC to common clk
ARM: at91: prepare common clk transition for sam9261 SoC
ARM: at91: updated the at91_dt_defconfig with support for the ADS7846
ARM: at91: dt: sam9261: Device Tree support for the at91sam9261ek
ARM: at91: dt: defconfig: Added the sam9261 to the list of DT-enabled SOCs
ARM: at91: dt: Add at91sam9261 dt SoC support
ARM: at91: switch sam9rl to common clock framework
ARM: at91/dt: define main clk frequency of at91sam9rlek
ARM: at91/dt: define at91sam9rl clocks
ARM: at91: prepare common clk transition for sam9rl SoCs
ARM: at91: prepare sam9 dt boards transition to common clk
ARM: at91: dt: sam9rl: Device Tree for the at91sam9rlek
ARM: at91/defconfig: Add the sam9rl to the list of DT-enabled SOCs
ARM: at91: Add at91sam9rl DT SoC support
ARM: at91: prepare at91sam9rl DT transition
ARM: at91/defconfig: refresh at91sam9260_9g20_defconfig
...
Linus Torvalds [Sat, 5 Apr 2014 20:44:27 +0000 (13:44 -0700)]
Merge tag 'fixes-non-critical-3.15' of git://git./linux/kernel/git/arm/arm-soc
Pull ARM SoC non-critical bug fixes from Arnd Bergmann:
"Lots of isolated bug fixes that were not found to be important enough
to be submitted before the merge window or backported into stable
kernels.
The vast majority of these came out of Arnd's randconfig testing and
just prevents running into build-time bugs in configurations that we
do not care about in practice"
* tag 'fixes-non-critical-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (75 commits)
ARM: at91: fix a typo
ARM: moxart: fix CPU selection
ARM: tegra: fix board DT pinmux setup
ARM: nspire: Fix compiler warning
IXP4xx: Fix DMA masks.
Revert "ARM: ixp4xx: Make dma_set_coherent_mask common, correct implementation"
IXP4xx: Fix Goramo Multilink GPIO conversion.
Revert "ARM: ixp4xx: fix gpio rework"
ARM: tegra: make debug_ll code build for ARMv6
ARM: sunxi: fix build for THUMB2_KERNEL
ARM: exynos: add missing include of linux/module.h
ARM: exynos: fix l2x0 saved regs handling
ARM: samsung: select CRC32 for SAMSUNG_PM_CHECK
ARM: samsung: select ATAGS where necessary
ARM: samsung: fix SAMSUNG_PM_DEBUG Kconfig logic
ARM: samsung: allow serial driver to be disabled
ARM: s5pv210: enable IDE support in MACH_TORBRECK
ARM: s5p64x0: fix building with only one soc type
ARM: s3c64xx: select power domains only when used
ARM: s3c64xx: MACH_SMDK6400 needs HSMMC1
...
Linus Torvalds [Sat, 5 Apr 2014 20:20:43 +0000 (13:20 -0700)]
Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm
Pull ARM changes from Russell King:
- Perf updates from Will Deacon:
- Support for Qualcomm Krait processors (run perf on your phone!)
- Support for Cortex-A12 (run perf stat on your FPGA!)
- Support for perf_sample_event_took, allowing us to automatically decrease
the sample rate if we can't handle the PMU interrupts quickly enough
(run perf record on your FPGA!).
- Basic uprobes support from David Long:
This patch series adds basic uprobes support to ARM. It is based on
patches developed earlier by Rabin Vincent. That approach of adding
hooks into the kprobes instruction parsing code was not well received.
This approach separates the ARM instruction parsing code in kprobes out
into a separate set of functions which can be used by both kprobes and
uprobes. Both kprobes and uprobes then provide their own semantic action
tables to process the results of the parsing (a schematic sketch of
this split appears right after this list).
- ARMv7M (microcontroller) updates from Uwe Kleine-König
- OMAP DMA updates (which recently gained Vinod's Ack even though
they have been sitting in linux-next for a few months) to reduce the
reliance of omap-dma on the code in arch/arm.
- SA11x0 changes from Dmitry Eremin-Solenikov and Alexander Shiyan
- Support for Cortex-A12 CPU
- Align support for ARMv6 with ARMv7 so they can cooperate better in a
single zImage.
- Addition of first AT_HWCAP2 feature bits for ARMv8 crypto support.
- Removal of IRQ_DISABLED from various ARM files
- Improved efficiency of virt_to_page() for single zImage
- Patch from Ulf Hansson to permit runtime PM callbacks to be available for
AMBA devices for suspend/resume as well.
- Finally kill asm/system.h on ARM.
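The kprobes/uprobes split mentioned above, one shared instruction
decoder whose results are handed to per-client semantic action tables,
is easier to picture with a sketch. The following is only a schematic
illustration of that shape with invented names; it is not the actual
arch/arm probes code:

    #include <stdint.h>
    #include <stdio.h>

    /* What the shared parser extracts from one ARM instruction. */
    struct decoded_insn {
        uint32_t insn;
        int is_branch;
    };

    /* Each client (a kprobes-like or uprobes-like layer) supplies its own
     * semantic actions; the decoder itself stays client-agnostic. */
    struct action_table {
        void (*branch)(const struct decoded_insn *d);
        void (*other)(const struct decoded_insn *d);
    };

    /* Shared decode step: classify the instruction once. */
    static void decode(uint32_t insn, struct decoded_insn *d)
    {
        d->insn = insn;
        d->is_branch = ((insn >> 24) & 0x0f) == 0x0a;  /* crude ARM "B" test */
    }

    /* Dispatch through whichever action table the client registered. */
    static void probe_insn(uint32_t insn, const struct action_table *acts)
    {
        struct decoded_insn d;

        decode(insn, &d);
        if (d.is_branch)
            acts->branch(&d);
        else
            acts->other(&d);
    }

    static void kp_branch(const struct decoded_insn *d)
    {
        printf("kprobes-like client: simulate branch %#x\n",
               (unsigned int)d->insn);
    }

    static void kp_other(const struct decoded_insn *d)
    {
        printf("kprobes-like client: single-step %#x\n",
               (unsigned int)d->insn);
    }

    int main(void)
    {
        const struct action_table kprobes_like = { kp_branch, kp_other };

        probe_insn(0xea000000, &kprobes_like);  /* "b ." branch encoding */
        probe_insn(0xe1a00000, &kprobes_like);  /* "mov r0, r0" (a nop)  */
        return 0;
    }

A uprobes-like client would reuse decode() unchanged and only register
a different action_table, which is the whole point of pulling the
parser out of kprobes.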
* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (89 commits)
dmaengine: omap-dma: more consolidation of CCR register setup
dmaengine: omap-dma: move IRQ handling to omap-dma
dmaengine: omap-dma: move register read/writes into omap-dma.c
ARM: omap: dma: get rid of 'p' allocation and clean up
ARM: omap: move dma channel allocation into plat-omap code
ARM: omap: dma: get rid of errata global
ARM: omap: clean up DMA register accesses
ARM: omap: remove almost-const variables
ARM: omap: remove references to disable_irq_lch
dmaengine: omap-dma: cleanup errata 3.3 handling
dmaengine: omap-dma: provide register read/write functions
dmaengine: omap-dma: use cached CCR value when enabling DMA
dmaengine: omap-dma: move barrier to omap_dma_start_desc()
dmaengine: omap-dma: move clnk_ctrl setting to preparation functions
dmaengine: omap-dma: improve efficiency loading C.SA/C.EI/C.FI registers
dmaengine: omap-dma: consolidate clearing channel status register
dmaengine: omap-dma: move CCR buffering disable errata out of the fast path
dmaengine: omap-dma: provide register definitions
dmaengine: omap-dma: consolidate setup of CCR
dmaengine: omap-dma: consolidate setup of CSDP
...
Linus Torvalds [Sat, 5 Apr 2014 20:19:32 +0000 (13:19 -0700)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rkuo/linux-hexagon-kernel
Pull Hexagon updates from Richard Kuo:
"Mostly cleanups for compilation with allmodconfig and some other
miscellaneous fixes"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rkuo/linux-hexagon-kernel:
Hexagon: update CR year for elf.h
Hexagon: remove SP macro
Hexagon: set ELF_EXEC_PAGESIZE to PAGE_SIZE
Hexagon: set the e_flags in user regset view for core dumps
Hexagon: fix atomic_set
Hexagon: add screen_info for VGA_CONSOLE
hexagon: correct type on pgd copy
smp, hexagon: kill SMP single function call interrupt
arch: hexagon: include: asm: add generic macro 'mmiowb' in "io.h"
arch: hexagon: kernel: hexagon_ksyms.c: export related symbols which various modules need
arch: hexagon: kernel: reset.c: use function pointer instead of function for pm_power_off and export it
arch: hexagon: include: asm: add "vga.h" in Kbuild
arch: hexagon: include: asm: Kbuild: add generic "serial.h" in Kbuild
arch: hexagon: include: uapi: asm: setup.h add swith macro __KERNEL__
arch: hexagon: include: asm: add prefix "hvm[ci]_" for all enum members in "hexagon_vm.h"
arch: hexagon: Kconfig: add HAVE_DMA_ATTR in Kconfig and remove "linux/dma-mapping.h" from "asm/dma-mapping.h"
arch: hexagon: kernel: add export symbol function __delay()
hexagon: include: asm: kgdb: extend DBG_MAX_REG_NUM for "cs0/1"
hexagon: kernel: kgdb: include related header for pass compiling.
hexagon: kernel: remove useless variables 'dn', 'r' and 'err' in time_init_deferred() in "time.c"
Linus Torvalds [Sat, 5 Apr 2014 20:18:52 +0000 (13:18 -0700)]
Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/gerg/m68knommu
Pull m68k fixes from Greg Ungerer:
"Just a couple of fixes. Clean up compile warnings by using correct
types in function args, and clean out the removed CONFIG_MTD_PARTITIONS"
* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/gerg/m68knommu:
m68knommu: fix arg types for outs* functions
m68k : Kill CONFIG_MTD_PARTITIONS
Linus Torvalds [Sat, 5 Apr 2014 20:10:00 +0000 (13:10 -0700)]
Merge branch 'topic/exynos' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
Pull exynos media updates from Mauro Carvalho Chehab:
"These are the remaining patches I have for the merge windows. It
basically adds a new sensor and adds the needed DT bits for it to
work"
* 'topic/exynos' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media:
[media] s5p-fimc: Remove reference to outdated macro
[media] s5p-jpeg: Fix broken indentation in jpeg-regs.h
[media] exynos4-is: Add the FIMC-IS ISP capture DMA driver
[media] exynos4-is: Add support for asynchronous subdevices registration
[media] exynos4-is: Add clock provider for the SCLK_CAM clock outputs
[media] exynos4-is: Use external s5k6a3 sensor driver
[media] V4L: s5c73m3: Add device tree support
[media] V4L: Add driver for s5k6a3 image sensor
[media] Documentation: devicetree: Update Samsung FIMC DT binding
[media] Documentation: dt: Add binding documentation for S5C73M3 camera
[media] Documentation: dt: Add binding documentation for S5K6A3 image sensor
Jeff Layton [Sat, 5 Apr 2014 12:45:57 +0000 (08:45 -0400)]
nfs: pass string length to pr_notice message about readdir loops
There is no guarantee that the strings in the nfs_cache_array will be
NULL-terminated. In the event that we end up hitting a readdir loop, we
need to ensure that we pass the warning message the length of the
string.
Reported-by: Lachlan McIlroy <lmcilroy@redhat.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
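A small aside on the technique: the printf family, and the kernel's
printk which supports the same string precision, accepts "%.*s", which
reads at most the given number of characters and never relies on a
trailing NUL. A minimal userspace sketch of the idea, with a made-up
struct standing in for an nfs_cache_array entry rather than the real
NFS code:

    #include <stdio.h>

    /* Hypothetical stand-in for one directory-cache entry: the name is
     * length-delimited and not guaranteed to be NUL-terminated. */
    struct dir_entry {
        const char *name;
        int len;
    };

    int main(void)
    {
        char raw[3] = { 'f', 'o', 'o' };                /* no trailing '\0' */
        struct dir_entry e = { .name = raw, .len = 3 };

        /* "%.*s" caps the number of characters read from e.name at e.len,
         * so the missing terminator is never touched. */
        printf("duplicate entry: %.*s\n", e.len, e.name);
        return 0;
    }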
Linus Torvalds [Sat, 5 Apr 2014 04:28:36 +0000 (21:28 -0700)]
Merge tag 'fbdev-main-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/tomba/linux
Pull fbdev changes from Tomi Valkeinen:
"Various fbdev fixes and improvements, but nothing big"
* tag 'fbdev-main-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/tomba/linux: (38 commits)
fbdev: Make the switch from generic to native driver less alarming
Video: atmel: avoid the id of fix screen info is overwritten
video: imxfb: Add DT default contrast control register property.
video: atmel_lcdfb: ensure the hardware is initialized with the correct mode
fbdev: vesafb: add dev->remove() callback
fbdev: efifb: add dev->remove() callback
video: pxa3xx-gcu: switch to devres functions
video: pxa3xx-gcu: provide an empty .open call
video: pxa3xx-gcu: pass around struct device *
video: pxa3xx-gcu: rename some symbols
sisfb: fix 1280x720 resolution support
video: fbdev: uvesafb: Remove impossible code path in uvesafb_init_info
video: fbdev: uvesafb: Remove redundant NULL check in uvesafb_remove
fbdev: FB_OPENCORES should depend on HAS_DMA
OMAPDSS: convert pixel clock to common videomode style
OMAPDSS: Remove unused get_context_loss_count support
OMAPDSS: use DISPC register to detect context loss
video: da8xx-fb: Use "SIMPLE_DEV_PM_OPS" macro
video: imxfb: Convert to SIMPLE_DEV_PM_OPS
video: imxfb: Resolve mismatch between backlight/contrast
...
Linus Torvalds [Sat, 5 Apr 2014 04:27:43 +0000 (21:27 -0700)]
Merge tag 'ktest-v3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest
Pull single ktest fix from Steven Rostedt:
"This just contains a single update by Satoru Takeuchi, which adds
CLOSE_CONSOLE_SIGNAL to the kvm.conf file, as the kvm guest requires a
different signal than a normal console uses"
* tag 'ktest-v3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest:
ktest: Set CLOSE_CONSOLE_SIGNAL in the kvm.conf
Linus Torvalds [Sat, 5 Apr 2014 04:25:28 +0000 (21:25 -0700)]
Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
Pull /dev/random changes from Ted Ts'o:
"A number of cleanups plus support for the RDSEED instruction, which
will be showing up in Intel Broadwell CPUs"
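For a bit of context on the instruction itself: RDSEED returns
seed-grade values straight from the hardware entropy conditioner
(unlike RDRAND, which returns DRBG output) and reports success through
the carry flag, so callers are expected to retry or fall back when it
is not ready. Below is a hedged x86-64 userspace sketch of that
contract, not the kernel's arch_get_random_seed_long() implementation;
it needs a CPU with RDSEED (Broadwell or later on Intel) and a
GCC/Clang-style compiler:

    #include <stdio.h>

    /* Ask the CPU for one seed-grade value; returns 1 on success, 0 when
     * the entropy source was not ready and the caller should retry. */
    static int rdseed_once(unsigned long long *out)
    {
        unsigned char ok;

        __asm__ volatile("rdseed %0; setc %1"
                         : "=r" (*out), "=qm" (ok) : : "cc");
        return ok;
    }

    int main(void)
    {
        unsigned long long seed;
        int i;

        /* Bounded retry loop: try the hardware source a few times, then
         * give up rather than spin forever. */
        for (i = 0; i < 10; i++) {
            if (rdseed_once(&seed)) {
                printf("rdseed: %#llx\n", seed);
                return 0;
            }
        }
        fprintf(stderr, "rdseed: entropy source not ready\n");
        return 1;
    }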
* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
random: Add arch_has_random[_seed]()
random: If we have arch_get_random_seed*(), try it before blocking
random: Use arch_get_random_seed*() at init time and once a second
x86, random: Enable the RDSEED instruction
random: use the architectural HWRNG for the SHA's IV in extract_buf()
random: clarify bits/bytes in wakeup thresholds
random: entropy_bytes is actually bits
random: simplify accounting code
random: tighten bound on random_read_wakeup_thresh
random: forget lock in lockless accounting
random: simplify accounting logic
random: fix comment on "account"
random: simplify loop in random_read
random: fix description of get_random_bytes
random: fix comment on proc_do_uuid
random: fix typos / spelling errors in comments
Richard Kuo [Thu, 2 Jan 2014 23:17:17 +0000 (17:17 -0600)]
Hexagon: update CR year for elf.h
Signed-off-by: Richard Kuo <rkuo@codeaurora.org>
Richard Kuo [Mon, 30 Dec 2013 20:21:12 +0000 (14:21 -0600)]
Hexagon: remove SP macro
The SP/r29 macro wasn't used anywhere else and was causing conflicts
with another module, so just remove it.
Signed-off-by: Richard Kuo <rkuo@codeaurora.org>
Richard Kuo [Mon, 12 Aug 2013 18:13:00 +0000 (13:13 -0500)]
Hexagon: set ELF_EXEC_PAGESIZE to PAGE_SIZE
Signed-off-by: Richard Kuo <rkuo@codeaurora.org>