Joe Thornber [Mon, 6 Oct 2014 12:48:51 +0000 (13:48 +0100)]
dm bufio: switch from a huge hash table to an rbtree
Converting over to using an rbtree eliminates a fixed 8MB allocation
from vmalloc space for the hash table.
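As an illustration only (the structure and helper names here are hypothetical, not the actual dm-bufio code), a block-keyed rbtree lookup has roughly this shape:

	/* needs <linux/rbtree.h> */
	struct buffer_node {
		struct rb_node node;	/* keyed by block number */
		sector_t block;
	};

	static struct buffer_node *buffer_find(struct rb_root *root, sector_t block)
	{
		struct rb_node *n = root->rb_node;

		while (n) {
			struct buffer_node *b = rb_entry(n, struct buffer_node, node);

			if (b->block == block)
				return b;
			n = block < b->block ? n->rb_left : n->rb_right;
		}
		return NULL;
	}

Unlike the fixed 8MB hash table, the rbtree costs one rb_node per cached buffer and needs no up-front allocation.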
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Joe Thornber [Mon, 10 Nov 2014 15:03:24 +0000 (15:03 +0000)]
dm btree: fix a recursion depth bug in btree walking code
The walk code was using a 'ro_spine' to hold its locked btree nodes.
But this data structure is designed for the rolling lock scheme, and
as such automatically unlocks blocks that are two steps up the call
chain. This is not suitable for the simple recursive walk algorithm,
which retraces its steps.
This code is only used by the persistent array code, which in turn is
only used by dm-cache. In order to trigger it you need to have a
mapping tree that is more than 2 levels deep; which equates to 8-16
million cache blocks. For instance a 4T ssd with a very small block
size of 32k only just triggers this bug.
The fix just places the locked blocks on the stack, and stops using
the ro_spine altogether.
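A sketch of the resulting shape, with hypothetical read_lock_block()/unlock_block() helpers standing in for the real block-manager calls: each recursion level keeps its locked block in a local variable on its own stack frame and unlocks it itself, so no rolling-lock spine is involved.

	static int walk_node(struct dm_btree_info *info, dm_block_t block,
			     int (*fn)(void *context, uint64_t *keys, void *leaf),
			     void *context)
	{
		struct dm_block *node;
		int r;

		r = read_lock_block(info, block, &node);	/* hypothetical helper */
		if (r)
			return r;

		/* visit this node; for internal nodes recurse into each child
		 * via another walk_node() call, each holding its own lock */

		unlock_block(info, node);			/* hypothetical helper */
		return r;
	}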
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
Joe Thornber [Fri, 10 Oct 2014 08:41:09 +0000 (09:41 +0100)]
dm thin: grab a virtual cell before looking up the mapping
Avoids normal IO racing with discard.
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
Heinz Mauelshagen [Wed, 29 Oct 2014 18:02:27 +0000 (19:02 +0100)]
dm raid: fix inaccessible superblocks causing oops in configure_discard_support
Commit
48cf06bc5f ("dm raid: add discard support for RAID levels 4, 5
and 6") did not properly handle missing metadata device(s). A failing
read of the superblock causes the metadata and data devices to be
removed from the dev array in struct raid_set, setting references to
both devices to NULL. configure_discard_support() nonetheless tries to
access the data dev unconditionally, causing an oops.
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Heinz Mauelshagen [Fri, 17 Oct 2014 11:38:50 +0000 (13:38 +0200)]
dm raid: ensure superblock's size matches device's logical block size
The dm-raid superblock (struct dm_raid_superblock) is padded to 512
bytes and that size is being used to read it in from the metadata
device into one preallocated page.
Reading or writing this on a 512-byte sector device works fine but on
a 4096-byte sector device this fails.
Set the dm-raid superblock's size to the logical block size of the
metadata device, because IO at that size is guaranteed to work. Also
add a size check to avoid silent partial metadata loss in case the
superblock should ever grow past the logical block size or PAGE_SIZE.
[includes pointer math fix from Dan Carpenter]
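A sketch of the idea (simplified, not the exact patch): size the superblock IO from the metadata device and refuse anything that could silently truncate it.

	unsigned int sb_size = bdev_logical_block_size(rdev->meta_bdev);

	/* superblock IO must cover the whole superblock and still fit the page */
	if (sb_size < sizeof(struct dm_raid_superblock) || sb_size > PAGE_SIZE)
		return -EINVAL;

	rdev->sb_size = sb_size;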
Reported-by: "Liuhua Wang" <lwang@suse.com>
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
Mikulas Patocka [Thu, 16 Oct 2014 18:45:20 +0000 (14:45 -0400)]
dm bufio: change __GFP_IO to __GFP_FS in shrinker callbacks
The shrinker uses gfp flags to indicate what kind of operation the
driver can wait for. If the __GFP_IO flag is present, the driver can wait
for block I/O operations; if the __GFP_FS flag is present, the driver can
wait on operations involving the filesystem.
dm-bufio tested for __GFP_IO. However, dm-bufio can run on a loop block
device that makes calls into the filesystem. If __GFP_IO is present and
__GFP_FS isn't, dm-bufio could still block on filesystem operations if it
runs on a loop block device.
The change from __GFP_IO to __GFP_FS supposedly fixes one observed (though
unreproducible) deadlock involving dm-bufio and loop device.
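The resulting logic in the shrinker callback looks roughly like this (simplified sketch; the lock helpers are assumed to be dm-bufio's internal ones):

	static unsigned long
	shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
	{
		struct dm_bufio_client *c =
			container_of(shrink, struct dm_bufio_client, shrinker);
		unsigned long freed;

		/* only block on the client lock if the caller may wait on the fs */
		if (sc->gfp_mask & __GFP_FS)
			dm_bufio_lock(c);
		else if (!dm_bufio_trylock(c))
			return SHRINK_STOP;

		freed = __scan(c, sc->nr_to_scan, sc->gfp_mask);
		dm_bufio_unlock(c);
		return freed;
	}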
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
Pavitra Kumar [Fri, 10 Oct 2014 15:19:46 +0000 (15:19 +0000)]
dm stripe: fix potential for leak in stripe_ctr error path
Fix a potential struct stripe_c leak that would occur if the
chunk_size exceeded the maximum allowed by dm_set_target_max_io_len
(UINT_MAX). However, in practice there is no possibility of this
occurring given that chunk_size is of type uint32_t. But it is good to
fix this to future-proof in case dm_set_target_max_io_len's
implementation were to change.
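The fix amounts to freeing the just-allocated context in that error path, roughly (simplified sketch of stripe_ctr's error handling):

	r = dm_set_target_max_io_len(ti, chunk_size);
	if (r) {
		kfree(sc);	/* don't leak the freshly allocated struct stripe_c */
		return r;
	}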
Signed-off-by: Pavitra Kumar <pavitrak@nvidia.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Alexey Khoroshilov [Wed, 1 Oct 2014 20:58:35 +0000 (22:58 +0200)]
dm log userspace: fix memory leak in dm_ulog_tfr_init failure path
If cn_add_callback() fails in dm_ulog_tfr_init(), the preallocated
memory is not deallocated; only cn_del_callback() is called.
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru>
Reviewed-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
Mikulas Patocka [Wed, 1 Oct 2014 17:29:48 +0000 (13:29 -0400)]
dm bufio: when done scanning return from __scan immediately
When __scan frees the required number of buffer entries that the
shrinker requested (nr_to_scan becomes zero), it must return. Before
this fix the __scan code exited only the inner loop and continued in the
outer loop -- which could result in reduced performance due to extra
buffers being freed (e.g. unnecessarily evicted thinp metadata needing
to be synchronously re-read into bufio's cache).
Also, move dm_bufio_cond_resched to __scan's inner loop, so that
iterating the bufio client's lru lists doesn't result in scheduling
latency.
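After the change the scan loop has roughly this shape (simplified sketch; helper and field names are approximate, not necessarily the real dm-bufio identifiers):

	static long __scan(struct dm_bufio_client *c, unsigned long nr_to_scan,
			   gfp_t gfp_mask)
	{
		int l;
		struct dm_buffer *b, *tmp;
		long freed = 0;

		for (l = 0; l < LIST_SIZE; l++) {
			list_for_each_entry_safe_reverse(b, tmp, &c->lru[l], lru_list) {
				freed += __try_evict_buffer(b, gfp_mask);
				if (!--nr_to_scan)
					return freed;	/* done: stop scanning at once */
				dm_bufio_cond_resched();	/* keep latency low while walking the LRU */
			}
		}
		return freed;
	}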
Reported-by: Joe Thornber <thornber@redhat.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # 3.2+
Joe Thornber [Tue, 30 Sep 2014 08:32:46 +0000 (09:32 +0100)]
dm bufio: update last_accessed when relinking a buffer
The 'last_accessed' member of the dm_buffer structure was only set when
the buffer was created. This led to each buffer being discarded
after dm_bufio_max_age time even if it was used recently. In practice
this resulted in all thinp metadata being evicted soon after being read
-- this is particularly problematic for metadata intensive workloads
like multithreaded small random IO.
'last_accessed' is now updated each time the buffer is moved to the head
of the LRU list, so the buffer is now properly discarded if it was not
used in dm_bufio_max_age time.
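In other words, the LRU relink helper now refreshes the timestamp, roughly (simplified sketch; the real function also maintains per-list counters):

	static void __relink_lru(struct dm_buffer *b, int dirty)
	{
		struct dm_bufio_client *c = b->c;

		b->last_accessed = jiffies;	/* refresh age on every move, not only at creation */
		list_move(&b->lru_list, &c->lru[dirty]);
	}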
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # v3.2+
Heinz Mauelshagen [Wed, 24 Sep 2014 15:47:19 +0000 (17:47 +0200)]
dm raid: add discard support for RAID levels 4, 5 and 6
In case of RAID levels 4, 5 and 6 we have to verify each RAID member's
ability to zero data on discards to avoid stripe data corruption -- if
discard_zeroes_data is not set for each RAID member discard support must
be disabled. But given the uncertainty of whether or not a RAID member
properly supports zeroing data on discard we require the user to
explicitly allow discard support on RAID levels 4, 5, and 6 by setting
a dm-raid module parameter, e.g.: dm-raid.devices_handle_discard_safely=Y
Otherwise, discards could cause data corruption on RAID4/5/6.
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Heinz Mauelshagen [Wed, 24 Sep 2014 15:47:18 +0000 (17:47 +0200)]
dm raid: add discard support for RAID levels 1 and 10
Discard support is not enabled for RAID levels 4, 5, and 6 at this time
due to concerns about unreliable discard_zeroes_data support on some
hardware. Otherwise, discards could cause stripe data corruption
(classic example of bad apples spoiling the bunch).
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Benjamin Marzinski [Wed, 13 Aug 2014 18:53:43 +0000 (13:53 -0500)]
dm: allow active and inactive tables to share dm_devs
Until this change, when loading a new DM table, DM core would re-open
all of the devices in the DM table. Now, DM core will avoid redundant
device opens (and closes when destroying the old table) if the old
table already has a device open using the same mode. This is achieved
by managing reference counts on the table_devices that DM core now
stores in the mapped_device structure (rather than in the dm_table
structure). So a mapped_device's active and inactive dm_tables' dm_dev
lists now just point to the dm_devs stored in the mapped_device's
table_devices list.
This improvement in DM core's device reference counting has the
side-effect of fixing a long-standing limitation of the multipath
target: a DM multipath table couldn't include any paths that were unusable
(failed). For example, if all paths have failed and you add a new,
working path to the table, you can't use it since the table load would
fail due to it still containing failed paths. Now a re-load of a
multipath table can include failed devices and when those devices become
active again they can be used instantly.
The device list code in dm.c isn't a straight copy/paste from the code in
dm-table.c, but it's very close (aside from some variable renames). One
subtle difference is that find_table_device for the table_devices list
will only match devices with the same name and mode. This is because we
don't want to upgrade a device's mode in the active table when an
inactive table is loaded.
Access to the mapped_device structure's table_devices list requires a
mutex (table_devices_lock), so that tables cannot be created and
destroyed concurrently.
Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Benjamin Marzinski [Wed, 13 Aug 2014 18:53:42 +0000 (13:53 -0500)]
dm mpath: stop queueing IO when no valid paths exist
'queue_io' is set so that IO is queued while paths are being
initialized. Clear queue_io in __choose_pgpath if there are no valid
paths, since there are obviously no paths that can be initialized.
Otherwise IOs to the device will back up.
Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Junichi Nomura [Fri, 3 Oct 2014 11:55:26 +0000 (11:55 +0000)]
dm: use bioset_create_nobvec()
Since DM core uses bio_clone_fast() for both bio-based and request-based
DM devices there is no need for DM's bioset to have a bvec mempool.
With this patch, on an arch with a 4KB page size for example, memory usage will be
reduced by 64KB for each bio-based DM device and 1MB for each
request-based DM device.
For example, when you create 10,000 bio-based DM devices and 1,000
request-based DM devices, memory usage of biovec under no load is:
# grep biovec /proc/slabinfo
biovec-256 418068 418068 4096 ...
biovec-128 0 0 2048 ...
biovec-64 0 0 1024 ...
biovec-16 0 0 256 ...
With this patch series applied, the usage becomes:
# grep biovec /proc/slabinfo
biovec-256 116 116 4096 ...
biovec-128 0 0 2048 ...
biovec-64 0 0 1024 ...
biovec-16 0 0 256 ...
So 4096 * (418068 - 116) = 1.6GB of memory is saved in this example.
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Junichi Nomura [Fri, 3 Oct 2014 11:55:16 +0000 (11:55 +0000)]
dm: remove nr_iovecs parameter from alloc_tio()
alloc_tio() uses bio_alloc_bioset() to allocate a clone-bio for a bio.
alloc_tio() takes the number of bvecs to allocate for the clone-bio.
However, with v3.14's immutable biovec changes DM now uses
__bio_clone_fast() and no longer needs to allocate bvecs.
In practice, the 'nr_iovecs' passed to alloc_tio() is always effectively
0. __clone_and_map_simple_bio() looked like it was passing non-zero
nr_iovecs, but its value was always within the range of inline bvecs and
no allocation actually happened. If allocation happened, the BUG_ON() in
__bio_clone_fast() would've triggered.
Remove the nr_iovecs parameter from alloc_tio() to prevent possible
future bio_alloc_bioset() mis-use of a new bioset interface that will no
longer allow bvecs to be allocated.
Also fix extra whitespace before the __bio_clone_fast() call in
__clone_and_map_simple_bio().
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Junichi Nomura [Fri, 3 Oct 2014 21:27:12 +0000 (17:27 -0400)]
block: add bioset_create_nobvec()
Users of bio_clone_fast() do not want bios with their own bvecs.
Allocating a bvec mempool as part of the bioset intended for such users
is a waste of memory.
bioset_create_nobvec() creates a bioset that doesn't have the bvec
mempool.
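Usage is the same as bioset_create(); the only difference is that the resulting bioset is meant for bio_clone_fast() users, which never need their own bvecs. A hedged example (pool_size/front_pad are placeholders, error handling trimmed):

	struct bio_set *bs;
	struct bio *clone;

	bs = bioset_create_nobvec(pool_size, front_pad);	/* no bvec mempool behind it */
	if (!bs)
		return -ENOMEM;

	clone = bio_clone_fast(bio, GFP_NOIO, bs);	/* shares the original bio's bvecs */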
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Junichi Nomura [Fri, 3 Oct 2014 21:27:11 +0000 (17:27 -0400)]
block: use bio_clone_fast() in blk_rq_prep_clone()
Request cloning clones bios in the request to track the completion
of each bio.
For that purpose, we can use bio_clone_fast() instead of bio_clone()
to avoid unnecessary allocation and copy of bvecs.
This patch reduces memory footprint of request-based device-mapper
(about 1-4KB for each request) and is a preparation for further
reduction of memory usage by removing unused bvec mempool.
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Hannes Reinecke [Wed, 1 Oct 2014 12:32:31 +0000 (14:32 +0200)]
block: misplaced rq_complete tracepoint
The rq_complete tracepoint was never issued for empty requests,
causing the resulting blktrace information to never show any
completion for those requests.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:20:08 +0000 (19:20 -0400)]
sd: Honor block layer integrity handling flags
A set of flags introduced in the block layer enable better control over
how protection information is handled. These flags are useful for both
error injection and data recovery purposes. Checking can be enabled and
disabled for controller and disk, and the guard tag format is now a
per-I/O property.
Update sd_protect_op to communicate the relevant information to the
low-level device driver via a set of flags in scsi_cmnd.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Rasmus Villemoes [Tue, 16 Sep 2014 20:51:16 +0000 (22:51 +0200)]
block: Replace strnicmp with strncasecmp
The kernel used to contain two functions for length-delimited,
case-insensitive string comparison, strnicmp with correct semantics
and a slightly buggy strncasecmp. The latter is the POSIX name, so
strnicmp was renamed to strncasecmp, and strnicmp made into a wrapper
for the new strncasecmp to avoid breaking existing users.
To allow the compat wrapper strnicmp to be removed at some point in
the future, and to avoid the extra indirection cost, do
s/strnicmp/strncasecmp/g.
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:20:07 +0000 (19:20 -0400)]
block: Add T10 Protection Information functions
The T10 Protection Information format is also used by some devices that
do not go through the SCSI layer (virtual block devices, NVMe). Relocate
the relevant functions to a block layer library that can be used without
involving SCSI.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:20:06 +0000 (19:20 -0400)]
block: Don't merge requests if integrity flags differ
We'd occasionally merge requests with conflicting integrity flags.
Introduce a merge helper which checks that the requests have compatible
integrity payloads.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:20:05 +0000 (19:20 -0400)]
block: Integrity checksum flag
Make the choice of checksum a per-I/O property by introducing a flag
that can be inspected by the SCSI layer. There are several reasons for
this:
1. It allows us to switch choice of checksum without unloading and
reloading the HBA driver.
2. During error recovery we need to be able to tell the HBA that
checksums read from disk should not be verified and converted to IP
checksums.
3. For error injection purposes we need to be able to write a bad guard
tag to storage. Since the storage device only supports T10 CRC we
need to be able to disable IP checksum conversion on the HBA.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:20:04 +0000 (19:20 -0400)]
block: Relocate bio integrity flags
Move flags affecting the integrity code out of the bio bi_flags and into
the block integrity payload.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:20:03 +0000 (19:20 -0400)]
block: Add a disk flag to block integrity profile
So far we have relied on the app tag size to determine whether a disk
has been formatted with T10 protection information or not. However, not
all target devices provide application tag storage.
Add a flag to the block integrity profile that indicates whether the
disk has been formatted with protection information.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@dev.mellanox.co.il>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:20:02 +0000 (19:20 -0400)]
block: Add prefix to block integrity profile flags
Add a BLK_ prefix to the integrity profile flags. Also rename the flags
to be more consistent with the generate/verify terminology in the rest
of the integrity code.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:20:01 +0000 (19:20 -0400)]
block: Clean up the code used to generate and verify integrity metadata
Instead of the "operate" parameter we pass in a seed value and a pointer
to a function that can be used to process the integrity metadata. The
generation function is changed to have a return value to fit into this
scheme.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:20:00 +0000 (19:20 -0400)]
block: Make protection interval calculation generic
Now that the protection interval has been detached from the sector size
we need to be able to handle sizes that are different from 4K and
512. Make the interval calculation generic.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:19:59 +0000 (19:19 -0400)]
block: Deprecate the use of the term sector in the context of block integrity
The protection interval is not necessarily tied to the logical block
size of a block device. Stop using the terms "sector" and "sectors".
Going forward we will use the term "seed" to describe the initial
reference tag value for a given I/O. "Interval" will be used to describe
the portion of the data buffer that a given piece of protection
information is associated with.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:19:58 +0000 (19:19 -0400)]
block: Remove bip_buf
bip_buf is not really needed so we can remove it.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:19:57 +0000 (19:19 -0400)]
block: Remove integrity tagging functions
None of the filesystems appear interested in using the integrity tagging
feature, potentially because very few storage devices actually permit
using the application tag space.
Remove the tagging functions.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:19:56 +0000 (19:19 -0400)]
block: Replace bi_integrity with bi_special
For commands like REQ_COPY we need a way to pass extra information along
with each bio. Like integrity metadata this information must be
available at the bottom of the stack so bi_private does not suffice.
Rename the existing bi_integrity field to bi_special and make it a union
so we can have different bio extensions for each class of command.
We previously used bi_integrity != NULL as a way to identify whether a
bio had integrity metadata or not. Introduce a REQ_INTEGRITY to be the
indicator now that bi_special can contain different things.
In addition, bio_integrity(bio) will now return a pointer to the
integrity payload (when applicable).
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Fri, 26 Sep 2014 23:19:55 +0000 (19:19 -0400)]
block: Get rid of bdev_integrity_enabled()
bdev_integrity_enabled() is only used by bio_integrity_enabled().
Combine these two functions.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Thu, 25 Sep 2014 15:23:47 +0000 (23:23 +0800)]
blk-mq: support per-dispatch_queue flush machinery
This patch supports running one single flush machinery for
each blk-mq dispatch queue, so that:
- current init_request and exit_request callbacks can
cover flush request too, then the buggy copying way of
initializing flush request's pdu can be fixed
- flushing performance gets improved in case of multi hw-queue
In a fio sync write test over virtio-blk (4 hw queues, ioengine=sync,
iodepth=64, numjobs=4, bs=4K), it is observed that throughput gets
increased a lot over my test environment:
- throughput: +70% in case of virtio-blk over null_blk
- throughput: +30% in case of virtio-blk over SSD image
The multi virtqueue feature isn't merged to QEMU yet, and patches for
the feature can be found in below tree:
git://kernel.ubuntu.com/ming/qemu.git v2.1.0-mq.4
And simply passing 'num_queues=4 vectors=5' should be enough to
enable the multi queue (quad queue) feature for QEMU virtio-blk.
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Thu, 25 Sep 2014 15:23:46 +0000 (23:23 +0800)]
block: introduce 'blk_mq_ctx' parameter to blk_get_flush_queue
This patch adds 'blk_mq_ctx' parameter to blk_get_flush_queue(),
so that this function can find the corresponding blk_flush_queue
bound to the current mq context, since the flush queue will become
per hw-queue.
For a legacy queue, the parameter can simply be 'NULL'.
For multiqueue case, the parameter should be set as the context
from which the related request is originated. With this context
info, the hw queue and related flush queue can be found easily.
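Once the flush queue does become per hw-queue, the lookup this parameter enables looks roughly like this (sketch; field names assumed, not necessarily the exact code):

	static struct blk_flush_queue *
	blk_get_flush_queue(struct request_queue *q, struct blk_mq_ctx *ctx)
	{
		struct blk_mq_hw_ctx *hctx;

		if (!q->mq_ops)
			return q->fq;		/* legacy path: one queue-wide flush queue */

		hctx = q->mq_ops->map_queue(q, ctx->cpu);	/* sw ctx -> hw queue */
		return hctx->fq;		/* per-dispatch-queue flush queue */
	}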
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Thu, 25 Sep 2014 15:23:45 +0000 (23:23 +0800)]
block: flush: avoid figuring out the flush queue unnecessarily
Figure out the flush queue only at the entry points that kick off the flush
machinery and in the request's completion handler, then pass it through.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Thu, 25 Sep 2014 15:23:44 +0000 (23:23 +0800)]
block: remove blk_init_flush() and its pair
The mission of the two helpers is now over; just call
blk_alloc_flush_queue() and blk_free_flush_queue() directly.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Thu, 25 Sep 2014 15:23:43 +0000 (23:23 +0800)]
block: introduce blk_flush_queue to drive flush machinery
This patch introduces 'struct blk_flush_queue' and puts all
flush machinery related fields into this structure, so that
- flush implementation details aren't exposed to drivers
- it is easy to convert to per dispatch-queue flush machinery
This patch is basically a mechanical replacement.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Thu, 25 Sep 2014 15:23:42 +0000 (23:23 +0800)]
block: avoid to use q->flush_rq directly
This patch tries to use a local variable to access the flush request,
so that we can convert to per-queue flush machinery a bit more easily.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Thu, 25 Sep 2014 15:23:41 +0000 (23:23 +0800)]
block: move flush initialization to blk_flush_init
These fields are always used with the flush request, so
initialize them together.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Thu, 25 Sep 2014 15:23:40 +0000 (23:23 +0800)]
block: introduce blk_init_flush and its pair
These two temporary functions are introduced for holding flush
initialization and de-initialization, so that we can
introduce the 'flush queue' more easily in the following patch. And
once 'flush queue' and its allocation/free functions are ready,
they will be removed for sake of code readability.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Thu, 25 Sep 2014 15:23:39 +0000 (23:23 +0800)]
blk-mq: allocate flush_rq in blk_mq_init_flush()
It is reasonable to allocate the flush request in blk_mq_init_flush().
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Thu, 25 Sep 2014 15:23:38 +0000 (23:23 +0800)]
blk-mq: handle failure path for initializing hctx
Failure to initialize one hctx isn't handled, so this patch
introduces blk_mq_init_hctx() and its pair to handle it explicitly.
This patch also makes the code cleaner.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 22 Sep 2014 13:59:31 +0000 (15:59 +0200)]
scsi: move blk_mq_start_request call earlier
Some ATA drivers need the dma drain size workaround, and thus need to
call blk_mq_start_request before the S/G mapping.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 22 Sep 2014 16:21:48 +0000 (10:21 -0600)]
block: fix blk_abort_request on blk-mq
Signed-off-by: Christoph Hellwig <hch@lst.de>
Moved blk_mq_rq_timed_out() definition to the private blk-mq.h header.
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Fri, 19 Sep 2014 13:53:46 +0000 (21:53 +0800)]
blk-timeout: fix blk_add_timer
Commit
8cb34819cdd5d ("blk-mq: unshared timeout handler") introduces
blk-mq's own timeout handler, and removes the following line:
blk_queue_rq_timed_out(q, blk_mq_rq_timed_out);
which then causes blk_add_timer() to bypass adding the timer,
since blk-mq no longer has q->rq_timed_out_fn defined.
This patch fixes the problem by bypassing the check for blk-mq,
so that both request deadlines are still set and the rolling
timer updated.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 17 Sep 2014 14:27:03 +0000 (08:27 -0600)]
blk-mq: limit memory consumption if a crash dump is active
It's not uncommon for crash dump kernels to be limited to 128MB or
something low in that area. This is normally not a problem for
devices as we don't use that much memory, but for some shared SCSI
setups with huge queue depths, it can potentially fill most of
memory with tons of request allocations. blk-mq does scale back
when it fails to allocate memory, but it scales back just enough
so that blk-mq succeeds. This could still leave the system with
not enough memory to make any real progress.
Check if we are in a kdump environment and limit the hardware
queues and tag depth.
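The clamp is a simple check against the crash-dump state, roughly like this (sketch placed in tag-set allocation; the exact limits here are illustrative):

	#include <linux/crash_dump.h>

	if (is_kdump_kernel()) {
		/* keep memory usage minimal when running as a crash dump kernel */
		set->nr_hw_queues = 1;
		set->queue_depth = min(64U, set->queue_depth);
	}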
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Wed, 17 Sep 2014 09:47:58 +0000 (17:47 +0800)]
blk-mq: remove unnecessary blk_clear_rq_complete()
This patch removes two unnecessary blk_clear_rq_complete() calls;
the REQ_ATOM_COMPLETE flag is cleared inside blk_mq_start_request(),
so:
- The blk_clear_rq_complete() in blk_flush_restore_request() isn't
needed because the request will be freed later, and clearing
it here may open a small race window with timeout.
- The blk_clear_rq_complete() in blk_mq_requeue_request() isn't
necessary either; even though REQ_ATOM_STARTED is cleared in
__blk_mq_requeue_request(), in theory it still may cause a small
race window with timeout since the two clear_bit() calls may be
reordered.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Sat, 13 Sep 2014 23:40:13 +0000 (16:40 -0700)]
blk-mq: pass a reserved argument to the timeout handler
Allow blk-mq to pass an argument to the timeout handler to indicate
if we're timing out a reserved or regular command. For many drivers
those need to be handled differently.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Sat, 13 Sep 2014 23:40:12 +0000 (16:40 -0700)]
blk-mq: unshared timeout handler
Duplicate the (small) timeout handler in blk-mq so that we can pass
arguments more easily to the driver timeout handler. This enables
the next patch.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Sat, 13 Sep 2014 23:40:11 +0000 (16:40 -0700)]
blk-mq: fix and simplify tag iteration for the timeout handler
Don't do a kmalloc from the timer to handle timeouts; chances are we could be
under heavy load or similar and thus just miss out on the timeouts.
Fortunately it is very easy to just iterate over all in use tags, and doing
this properly actually cleans up the blk_mq_busy_iter API as well, and
prepares us for the next patch by passing a reserved argument to the
iterator.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Sat, 13 Sep 2014 23:40:10 +0000 (16:40 -0700)]
blk-mq: rename blk_mq_end_io to blk_mq_end_request
Now that we've changed the driver API on the submission side use the
opportunity to fix up the name on the completion side to fit into the
general scheme.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Sat, 13 Sep 2014 23:40:09 +0000 (16:40 -0700)]
blk-mq: call blk_mq_start_request from ->queue_rq
When we call blk_mq_start_request from the core blk-mq code before calling into
->queue_rq there is a racy window where the timeout handler can hit before we've
fully set up the driver specific part of the command.
Move the call to blk_mq_start_request into the driver so the driver can start
the request only once it is fully set up.
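In a driver's ->queue_rq handler the ordering then looks roughly like this (sketch, assuming the (hctx, rq, last) signature from this series; struct my_cmd and its fields are hypothetical):

	static int my_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *rq,
			       bool last)
	{
		struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);	/* driver-private pdu */

		/* fully set up the driver-specific part first ... */
		cmd->tag = rq->tag;

		blk_mq_start_request(rq);	/* only now can the timeout handler see a sane cmd */

		/* ... then actually issue 'cmd' to the hardware */
		return BLK_MQ_RQ_QUEUE_OK;
	}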
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Sat, 13 Sep 2014 23:40:08 +0000 (16:40 -0700)]
blk-mq: remove REQ_END
Pass an explicit parameter for the last request in a batch to ->queue_rq
instead of using a request flag. Besides being a cleaner and non-stateful
interface this is also required for the next patch, which fixes the blk-mq
I/O submission code to not start a timer too early.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Mon, 22 Sep 2014 17:57:32 +0000 (11:57 -0600)]
Merge branch 'for-linus' into for-3.18/core
Moving patches from for-linus to 3.18 instead; pull in the changes
that will go to Linus today.
Jens Axboe [Fri, 19 Sep 2014 19:10:29 +0000 (13:10 -0600)]
blk-mq: use blk_mq_start_hw_queues() when running requeue work
When requests are retried due to hw or sw resource shortages,
we often stop the associated hardware queue. So ensure that we
restart the queues when running the requeue work, otherwise the
queue run will be a no-op.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 19 Sep 2014 14:04:53 +0000 (08:04 -0600)]
blk-mq: fix potential oops on out-of-memory in __blk_mq_alloc_rq_maps()
__blk_mq_alloc_rq_maps() can be invoked multiple times, if we scale
back the queue depth if we are low on memory. So don't clear
set->tags when we fail, this is handled directly in
the parent function, blk_mq_alloc_tag_set().
Reported-by: Robert Elliott <Elliott@hp.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 16 Sep 2014 21:44:07 +0000 (14:44 -0700)]
blk-mq: avoid infinite recursion with the FUA flag
We should not insert requests into the flush state machine from
blk_mq_insert_request. All incoming flush requests come through
blk_{m,s}q_make_request and are handled there, while blk_execute_rq_nowait
should only be called for BLOCK_PC requests. All other callers
deal with requests that already went through the flush state machine
and shouldn't be reinserted into it.
Reported-by: Robert Elliott <Elliott@hp.com>
Debugged-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
David Hildenbrand [Thu, 18 Sep 2014 09:04:31 +0000 (11:04 +0200)]
blk-mq: Avoid race condition with uninitialized requests
This patch should fix the bug reported in
https://lkml.org/lkml/2014/9/11/249.
We have to initialize at least the atomic_flags and the cmd_flags when
allocating storage for the requests.
Otherwise blk_mq_timeout_check() might dereference uninitialized
pointers when racing with the creation of a request.
Also move the reset of cmd_flags for the initializing code to the point
where a request is freed. So we will never end up with pending flush
request indicators that might trigger dereferences of invalid pointers
in blk_mq_timeout_check().
Cc: stable@vger.kernel.org
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reported-by: Paulo De Rezende Pinatti <ppinatti@linux.vnet.ibm.com>
Tested-by: Paulo De Rezende Pinatti <ppinatti@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 16 Sep 2014 16:37:37 +0000 (10:37 -0600)]
blk-mq: request deadline must be visible before marking rq as started
When we start the request, we set the deadline and flip the bits
marking the request as started and non-complete. However, it's
important that the deadline store is ordered before flipping the
bits, otherwise we could have a small window where the request is
marked started but with an invalid deadline. This can confuse the
timeout handling.
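Conceptually the start path needs a barrier between the deadline store and the atomic flag updates, something like this (sketch; the barrier primitive shown is one reasonable choice, not necessarily the exact one used):

	rq->deadline = jiffies + q->rq_timeout;

	/* make the deadline visible before the request can be seen as started */
	smp_mb__before_atomic();

	set_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
	clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);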
Suggested-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Linus Torvalds [Tue, 16 Sep 2014 14:47:04 +0000 (07:47 -0700)]
Merge tag 'gfs2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-3.0-fixes
Pull gfs2 fixes from Steven Whitehouse:
"Here are a number of small fixes for GFS2.
There is a fix for FIEMAP on large sparse files, a negative dentry
hashing fix, a fix for flock, and a bug fix relating to d_splice_alias
usage.
There are also (patches 1 and 5) a couple of updates which are less
critical, but small and low risk"
* tag 'gfs2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-3.0-fixes:
GFS2: fix d_splice_alias() misuses
GFS2: Don't use MAXQUOTAS value
GFS2: Hash the negative dentry during inode lookup
GFS2: Request demote when a "try" flock fails
GFS2: Change maxlen variables to size_t
GFS2: fs/gfs2/super.c: replace seq_printf by seq_puts
James Hogan [Tue, 16 Sep 2014 12:07:35 +0000 (13:07 +0100)]
vfs: workaround gcc <4.6 build error in link_path_walk()
Commit
d6bb3e9075bb ("vfs: simplify and shrink stack frame of
link_path_walk()") introduced build problems with GCC versions older
than 4.6 due to the initialisation of a member of an anonymous union in
struct qstr without enclosing braces.
This hits GCC bug 10676 [1] (which was fixed in GCC 4.6 by [2]), and
causes the following build error:
fs/namei.c: In function 'link_path_walk':
fs/namei.c:1778: error: unknown field 'hash_len' specified in initializer
This is worked around by adding explicit braces.
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=10676
[2] https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=159206
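The workaround is purely syntactic; with an anonymous union inside struct qstr, older gcc needs the union member's initializer wrapped in its own braces, e.g. (sketch):

	/* gcc < 4.6 rejects initializing the anonymous union member directly:
	 *	struct qstr this = { .hash_len = hash_name(name), .name = name };
	 * adding an explicit set of braces around the union initializer works:
	 */
	struct qstr this = { { .hash_len = hash_name(name) }, .name = name };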
Fixes: d6bb3e9075bb (vfs: simplify and shrink stack frame of link_path_walk())
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-metag@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Tue, 16 Sep 2014 03:35:32 +0000 (20:35 -0700)]
Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux
Pull virtio fixes from Rusty Russell:
"virtio-rng corner case fixes, with cc:stable.
Survived a few days in linux-next"
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux:
virtio-rng: skip reading when we start to remove the device
virtio-rng: fix stuck of hot-unplugging busy device
Linus Torvalds [Mon, 15 Sep 2014 23:20:56 +0000 (16:20 -0700)]
Merge tag 'regmap-v3.17-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Pull regmap fix from Mark Brown:
"Fix registers file in debugfs
Ensure that the mode reported for the registers file in debugfs is
accurate by marking it as read only when the define to enable writes
has not been set. This is on the edge of being a bug fix but it's
debugfs and it makes it much easier for users to spot what's going
wrong when they forget to enable writeability"
* tag 'regmap-v3.17-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
regmap: Fix debugfs-file 'registers' mode
Linus Torvalds [Mon, 15 Sep 2014 22:12:01 +0000 (15:12 -0700)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input
Pull input updates from Dmitry Torokhov:
"A few quirks for i8042/AT keyboards and a small device tree doc fix
for Atmel Touchscreens"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
Input: atmel_mxt_ts - fix merge in DT documentation
Input: i8042 - also set the firmware id for MUXed ports
Input: i8042 - add nomux quirk for Avatar AVIU-145A6
Input: i8042 - add Fujitsu U574 to no_timeout dmi table
Input: atkbd - do not try 'deactivate' keyboard on any LG laptops
Tony Luck [Mon, 15 Sep 2014 16:32:23 +0000 (09:32 -0700)]
ia64: Fix syscall number for memfd_create
Cut & paste typo from the line above.
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Mon, 15 Sep 2014 17:51:07 +0000 (10:51 -0700)]
vfs: simplify and shrink stack frame of link_path_walk()
Commit
9226b5b440f2 ("vfs: avoid non-forwarding large load after small
store in path lookup") made link_path_walk() always access the
"hash_len" field as a single 64-bit entity, in order to avoid mixed size
accesses to the members.
However, what I didn't notice was that that effectively means that the
whole "struct qstr this" is now basically redundant. We already
explicitly track the "const char *name", and if we just use "u64
hash_len" instead of "long len", there is nothing else left of the
"struct qstr".
We do end up wanting the "struct qstr" if we have a filesystem with a
"d_hash()" function, but that's a rare case, and we might as well then
just squirrel away the name and hash_len at that point.
End result: fewer live variables in the loop, a smaller stack frame, and
better code generation. And we don't need to pass in pointer variables
to helper functions any more, because the return value contains all the
relevant information. So this removes more lines than it adds, and the
source code is clearer too.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Mon, 15 Sep 2014 14:23:21 +0000 (07:23 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
"This fixes the newly added drbg generator so that it actually works on
32-bit machines. Previously the code was only tested on 64-bit and on
32-bit it overflowed and simply doesn't work"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: drbg - remove check for uninitialized DRBG handle
crypto: drbg - backport "fix maximum value checks on 32 bit systems"
Linus Torvalds [Mon, 15 Sep 2014 00:50:12 +0000 (17:50 -0700)]
Linux 3.17-rc5
Linus Torvalds [Mon, 15 Sep 2014 00:37:36 +0000 (17:37 -0700)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull vfs fixes from Al Viro:
"double iput() on failure exit in lustre, racy removal of spliced
dentries from ->s_anon in __d_materialise_dentry() plus a bunch of
assorted RCU pathwalk fixes"
The RCU pathwalk fixes end up fixing a couple of cases where we
incorrectly dropped out of RCU walking, due to incorrect initialization
and testing of the sequence locks in some corner cases. Since dropping
out of RCU walk mode forces the slow locked accesses, those corner cases
slowed down quite dramatically.
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
be careful with nd->inode in path_init() and follow_dotdot_rcu()
don't bugger nd->seq on set_root_rcu() from follow_dotdot_rcu()
fix bogus read_seqretry() checks introduced in
b37199e
move the call of __d_drop(anon) into __d_materialise_unique(dentry, anon)
[fix] lustre: d_make_root() does iput() on dentry allocation failure
Linus Torvalds [Mon, 15 Sep 2014 00:28:32 +0000 (17:28 -0700)]
vfs: avoid non-forwarding large load after small store in path lookup
The performance regression that Josef Bacik reported in the pathname
lookup (see commit
99d263d4c5b2 "vfs: fix bad hashing of dentries") made
me look at performance stability of the dcache code, just to verify that
the problem was actually fixed. That turned up a few other problems in
this area.
There are a few cases where we exit RCU lookup mode and go to the slow
serializing case when we shouldn't, Al has fixed those and they'll come
in with the next VFS pull.
But my performance verification also shows that link_path_walk() turns
out to have a very unfortunate 32-bit store of the length and hash of
the name we look up, followed by a 64-bit read of the combined hash_len
field. That screws up the processor store to load forwarding, causing
an unnecessary hiccup in this critical routine.
It's caused by the ugly calling convention for the "hash_name()"
function, and easily fixed by just making hash_name() fill in the whole
'struct qstr' rather than passing it a pointer to just the hash value.
With that, the profile for this function looks much smoother.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sun, 14 Sep 2014 19:28:08 +0000 (12:28 -0700)]
Merge branch 'parisc-3.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux
Pull parisc updates from Helge Deller:
"The most important patch is a new Light Weigth Syscall (LWS) for 8,
16, 32 and 64 bit atomic CAS operations which is required in order to
be able to implement the atomic gcc builtins on our platform.
Other than that, we wire up the seccomp, getrandom and memfd_create
syscalls, fix a minor off-by-one bug and a wrong printk string"
* 'parisc-3.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
parisc: Implement new LWS CAS supporting 64 bit operations.
parisc: Wire up seccomp, getrandom and memfd_create syscalls
parisc: dino: fix %d confusingly prefixed with 0x in format string
parisc: sys_hpux: NUL terminator is one past the end
Al Viro [Sun, 14 Sep 2014 01:59:43 +0000 (21:59 -0400)]
be careful with nd->inode in path_init() and follow_dotdot_rcu()
in the former we simply check if dentry is still valid after picking
its ->d_inode; in the latter we fetch ->d_inode in the same places
where we fetch dentry and its ->d_seq, under the same checks.
Cc: stable@vger.kernel.org # 2.6.38+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Al Viro [Sun, 14 Sep 2014 01:55:46 +0000 (21:55 -0400)]
don't bugger nd->seq on set_root_rcu() from follow_dotdot_rcu()
return the value instead, and have path_init() do the assignment. Broken by
"vfs: Fix absolute RCU path walk failures due to uninitialized seq number",
which was Cc-stable with 2.6.38+ as destination. This one should go where
it went.
To avoid a dummy value being returned when root is already set (it would do
no harm, actually, since the only caller that doesn't ignore the return value
is guaranteed to have nd->root *not* set, but it's more obvious that way),
lift the check into callers. And do the same to set_root(), to keep them
in sync.
Cc: stable@vger.kernel.org # 2.6.38+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Linus Torvalds [Sun, 14 Sep 2014 17:54:12 +0000 (10:54 -0700)]
Merge tag 'ntb-3.17' of git://github.com/jonmason/ntb
Pull ntb driver bugfixes from Jon Mason:
"NTB driver fixes for queue spread and buffer alignment. Also, update
to MAINTAINERS to reflect new e-mail address"
* tag 'ntb-3.17' of git://github.com/jonmason/ntb:
ntb: Add alignment check to meet hardware requirement
MAINTAINERS: update NTB info
NTB: correct the spread of queues over mw's
Linus Torvalds [Sun, 14 Sep 2014 17:37:10 +0000 (10:37 -0700)]
Merge branch 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull ARM irq chip fixes from Thomas Gleixner:
"Another pile of ARM specific irq chip fixlets:
- off by one bugs in the crossbar driver
- missing annotations
- a bunch of "make it compile" updates
I pulled the lot today from Jason, but it has been in -next for at
least a week"
* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip: gic-v3: Declare rdist as __percpu pointer to __iomem pointer
irqchip: gic: Make gic_default_routable_irq_domain_ops static
irqchip: exynos-combiner: Fix compilation error on ARM64
irqchip: crossbar: Off by one bugs in init
irqchip: gic-v3: Tag all low level accessors __maybe_unused
irqchip: gic-v3: Only define gic_peek_irq() when building SMP
Thomas Gleixner [Sun, 14 Sep 2014 13:19:19 +0000 (15:19 +0200)]
Merge tag 'irqchip-urgent-3.17' of git://git.infradead.org/users/jcooper/linux into irq/urgent
irqchip fixes for v3.17 from Jason Cooper
- GIC/GICV3: Various fixlets
- crossbar: Fix off-by-one bug
- exynos-combiner: Fix arm64 build error
Dave Jiang [Thu, 28 Aug 2014 20:53:02 +0000 (13:53 -0700)]
ntb: Add alignment check to meet hardware requirement
The value in the NTB translate register must be BAR-size aligned.
This alignment check makes sure that the DMA memory allocated has the
proper alignment. Another requirement for NTB to function properly with
memory window BAR size greater or equal to 4M is to use the CMA feature
in 3.16 kernel with the appropriate CONFIG_CMA_ALIGNMENT and
CONFIG_CMA_SIZE_MBYTES set.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Jon Mason [Thu, 19 Jun 2014 17:44:24 +0000 (10:44 -0700)]
MAINTAINERS: update NTB info
Update my contact info to my personal email address and add Dave Jiang.
Signed-off-by: Jon Mason <jon.mason@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Jon Mason [Thu, 19 Jun 2014 17:11:13 +0000 (10:11 -0700)]
NTB: correct the spread of queues over mw's
The detection of an uneven number of queues on the given memory windows
was not correct. mw_num is zero-based, and the mod should have been a
division to spread them evenly over the mw's.
Signed-off-by: Jon Mason <jon.mason@intel.com>
Al Viro [Sun, 14 Sep 2014 01:50:45 +0000 (21:50 -0400)]
fix bogus read_seqretry() checks introduced in
b37199e
read_seqretry() returns true on mismatch, not on match...
Cc: stable@vger.kernel.org # 3.15+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Al Viro [Thu, 11 Sep 2014 22:55:50 +0000 (18:55 -0400)]
move the call of __d_drop(anon) into __d_materialise_unique(dentry, anon)
and lock the right list there
Cc: stable@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Al Viro [Wed, 3 Sep 2014 17:11:09 +0000 (13:11 -0400)]
[fix] lustre: d_make_root() does iput() on dentry allocation failure
double-free is a bad thing
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Linus Torvalds [Sat, 13 Sep 2014 21:22:12 +0000 (14:22 -0700)]
Merge branches 'locking-urgent-for-linus' and 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull futex and timer fixes from Thomas Gleixner:
"A oneliner bugfix for the jinxed futex code:
- Drop hash bucket lock in the error exit path. I really could slap
myself for introducing that bug while fixing all the other horror
in that code three months ago ...
and the timer department is not too proud about the following fixes:
- Deal with a long standing rounding bug in the timeval to jiffies
conversion. It's a real issue and this fix fell through the cracks
for quite some time.
- Another round of alarmtimer fixes. Finally this code gets used
more widely and the subtle issues hidden for quite some time are
noticed and fixed. Nothing really exciting, just the itty bitty
details which bite the serious users here and there"
* 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
futex: Unlock hb->lock in futex_wait_requeue_pi() error path
* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
alarmtimer: Lock k_itimer during timer callback
alarmtimer: Do not signal SIGEV_NONE timers
alarmtimer: Return relative times in timer_gettime
jiffies: Fix timeval conversion to jiffies
Guy Martin [Fri, 12 Sep 2014 16:02:34 +0000 (18:02 +0200)]
parisc: Implement new LWS CAS supporting 64 bit operations.
The current LWS cas only works correctly for 32bit. The new LWS allows
for CAS operations of variable size.
Signed-off-by: Guy Martin <gmsoft@tuxicoman.be>
Cc: <stable@vger.kernel.org> # 3.13+
Signed-off-by: Helge Deller <deller@gmx.de>
Linus Torvalds [Sat, 13 Sep 2014 18:30:10 +0000 (11:30 -0700)]
vfs: fix bad hashing of dentries
Josef Bacik found a performance regression between 3.2 and 3.10 and
narrowed it down to commit
bfcfaa77bdf0 ("vfs: use 'unsigned long'
accesses for dcache name comparison and hashing"). He reports:
"The test case is essentially
	for (i = 0; i < 1000000; i++)
		mkdir("a$i");
On xfs on a fio card this goes at about 20k dir/sec with 3.2, and 12k
dir/sec with 3.10. This is because we spend waaaaay more time in
__d_lookup on 3.10 than in 3.2.
The new hashing function for strings is suboptimal for <
sizeof(unsigned long) string names (and hell even > sizeof(unsigned
long) string names that I've tested). I broke out the old hashing
function and the new one into a userspace helper to get real numbers
and this is what I'm getting:
Old hash table had 1000000 entries, 0 dupes, 0 max dupes
New hash table had 12628 entries, 987372 dupes, 900 max dupes
We had 11400 buckets with a p50 of 30 dupes, p90 of 240 dupes, p99 of 567 dupes for the new hash
My test does the hash, and then does the d_hash into an integer pointer
array the same size as the dentry hash table on my system, and then
just increments the value at the address we got to see how many
entries we overlap with.
As you can see the old hash function ended up with all 1 million
entries in their own bucket, whereas the new one they are only
distributed among ~12.5k buckets, which is why we're using so much
more CPU in __d_lookup".
The reason for this hash regression is two-fold:
- On 64-bit architectures the down-mixing of the original 64-bit
word-at-a-time hash into the final 32-bit hash value is very
simplistic and suboptimal, and just adds the two 32-bit parts
together.
In particular, because there is no bit shuffling and the mixing
boundary is also a byte boundary, similar character patterns in the
low and high word easily end up just canceling each other out.
- the old byte-at-a-time hash mixed each byte into the final hash as it
hashed the path component name, resulting in the low bits of the hash
generally being a good source of hash data. That is not true for the
word-at-a-time case, and the hash data is distributed among all the
bits.
The fix is the same in both cases: do a better job of mixing the bits up
and using as much of the hash data as possible. We already have the
"hash_32|64()" functions to do that.
Reported-by: Josef Bacik <jbacik@fb.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chris Mason <clm@fb.com>
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sat, 13 Sep 2014 18:24:03 +0000 (11:24 -0700)]
Make hash_64() use a 64-bit multiply when appropriate
The hash_64() function historically does the multiply by the
GOLDEN_RATIO_PRIME_64 number with explicit shifts and adds, because
unlike the 32-bit case, gcc seems unable to turn the constant multiply
into the more appropriate shift and adds when required.
However, that means that we generate those shifts and adds even when the
architecture has a fast multiplier, and could just do it better in
hardware.
Use the now-cleaned-up CONFIG_ARCH_HAS_FAST_MULTIPLIER (together with
"is it a 64-bit architecture") to decide whether to use an integer
multiply or the explicit sequence of shift/add instructions.
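The shape of the change is a compile-time choice in hash_64(), roughly (sketch, with the existing shift/add sequence elided):

	static __always_inline u64 hash_64(u64 val, unsigned int bits)
	{
		u64 hash = val;

	#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
		hash = hash * GOLDEN_RATIO_PRIME_64;	/* the hardware multiplier is fast: use it */
	#else
		/* ... the existing explicit sequence of shifts and adds ... */
	#endif

		/* high bits are more random, so use them */
		return hash >> (64 - bits);
	}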
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sat, 13 Sep 2014 18:14:53 +0000 (11:14 -0700)]
Make ARCH_HAS_FAST_MULTIPLIER a real config variable
It used to be an ad-hoc hack defined by the x86 version of
<asm/bitops.h> that enabled a couple of library routines to know whether
an integer multiply is faster than repeated shifts and additions.
This just makes it use the real Kconfig system instead, and makes x86
(which was the only architecture that did this) select the option.
NOTE! Even for x86, this really is kind of wrong. If we cared, we would
probably not enable this for builds optimized for netburst (P4), where
shifts-and-adds are generally faster than multiplies. This patch does
*not* change that kind of logic, though, it is purely a syntactic change
with no code changes.
This was triggered by the fact that we have other places that really
want to know "do I want to expand multiples by constants by hand or
not", particularly the hash generation code.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sat, 13 Sep 2014 17:04:10 +0000 (10:04 -0700)]
Merge tag 'dm-3.17-fix2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper fix from Mike Snitzer:
"Fix a race in the DM cache target that caused dirty blocks to be
marked as clean. This could cause no writeback to occur or spurious
dirty block counts"
* tag 'dm-3.17-fix2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm cache: fix race causing dirty blocks to be marked as clean
Linus Torvalds [Sat, 13 Sep 2014 16:39:55 +0000 (09:39 -0700)]
Merge branch 'for-linus' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
"A small collection of fixes for the current rc series. This contains:
- Two small blk-mq patches from Rob Elliott, cleaning up error case
at init time.
- A fix from Ming Lei, fixing SG merging for blk-mq where
QUEUE_FLAG_SG_NO_MERGE is the default.
- A dev_t minor lifetime fix from Keith, fixing an issue where a
minor might be reused before all references to it were gone.
- Fix from Alan Stern where an unbalanced queue bypass caused SCSI
some headaches when it does a series of add/del on devices without
fully registering the queue.
- A fix from me for improving the scaling of tag depth in blk-mq if
we are short on memory"
* 'for-linus' of git://git.kernel.dk/linux-block:
blk-mq: scale depth and rq map appropriate if low on memory
Block: fix unbalanced bypass-disable in blk_register_queue
block: Fix dev_t minor allocation lifetime
blk-mq: cleanup after blk_mq_init_rq_map failures
blk-mq: pass along blk_mq_alloc_tag_set return values
blk-merge: fix blk_recount_segments
Linus Torvalds [Sat, 13 Sep 2014 00:45:27 +0000 (17:45 -0700)]
Merge tag 'stable/for-linus-3.17-b-rc4-arm-tag' of git://git./linux/kernel/git/xen/tip
Pull Xen ARM bugfix from Stefano Stabellini:
"The patches fix the "xen_add_mach_to_phys_entry: cannot add" bug that
has been affecting xen on arm and arm64 guests since 3.16. They
require a few hypervisor side changes that just went in xen-unstable.
A couple of days ago David sent out a pull request with a few other
Xen fixes (it is already in master). Sorry we didn't synchronize
better among us"
* tag 'stable/for-linus-3.17-b-rc4-arm-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
xen/arm: remove mach_to_phys rbtree
xen/arm: reimplement xen_dma_unmap_page & friends
xen/arm: introduce XENFEAT_grant_map_identity
Richard Larocque [Wed, 10 Sep 2014 01:31:05 +0000 (18:31 -0700)]
alarmtimer: Lock k_itimer during timer callback
Locks the k_itimer's it_lock member when handling the alarm timer's
expiry callback.
The regular posix timers defined in posix-timers.c have this lock held
during timeout processing because their callbacks are routed through
posix_timer_fn(). The alarm timers follow a different path, so they
ought to grab the lock somewhere else.
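The shape of the fix is roughly the sketch below (kernel context; the
container_of() path reflects how the alarm timer is embedded in the k_itimer,
and the elided body is where signal delivery and rearming happen):
static enum alarmtimer_restart alarm_handle_timer(struct alarm *alarm,
						  ktime_t now)
{
	struct k_itimer *ptr = container_of(alarm, struct k_itimer,
					    it.alarm.alarmtimer);
	enum alarmtimer_restart result = ALARMTIMER_NORESTART;
	unsigned long flags;

	/* Hold it_lock across expiry handling, as posix_timer_fn() does. */
	spin_lock_irqsave(&ptr->it_lock, flags);
	/* ... deliver the signal and/or rearm the timer under the lock ... */
	spin_unlock_irqrestore(&ptr->it_lock, flags);

	return result;
}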
Cc: stable@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Sharvil Nanavati <sharvil@google.com>
Signed-off-by: Richard Larocque <rlarocque@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Richard Larocque [Wed, 10 Sep 2014 01:31:04 +0000 (18:31 -0700)]
alarmtimer: Do not signal SIGEV_NONE timers
Avoids sending a signal to alarm timers created with sigev_notify set to
SIGEV_NONE by checking for that special case in the timeout callback.
The regular posix timers avoid sending signals to SIGEV_NONE timers by
not scheduling any callbacks for them in the first place. Although it
would be possible to do something similar for alarm timers, it's simpler
to handle this as a special case in the timeout callback.
Prior to this patch, the alarm timer would ignore the sigev_notify value
and try to deliver signals to the process anyway. Even worse, the
sanity check for the value of sigev_signo is skipped when SIGEV_NONE is
specified, so the signal number could be bogus. If sigev_signo was an
uninitialized value (as it often would be if SIGEV_NONE is used), then
it's hard to predict which signal will be sent.
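In code, the special case amounts to roughly the fragment below (illustrative
only; ptr names the k_itimer inside the expiry path, with it_lock held as per
the previous patch):
	/* Only deliver a signal if the timer was not created with SIGEV_NONE. */
	if ((ptr->it_sigev_notify & ~SIGEV_THREAD_ID) != SIGEV_NONE) {
		if (posix_timer_event(ptr, 0) != 0)
			ptr->it_overrun++;
	}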
Cc: stable@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Sharvil Nanavati <sharvil@google.com>
Signed-off-by: Richard Larocque <rlarocque@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Richard Larocque [Wed, 10 Sep 2014 01:31:03 +0000 (18:31 -0700)]
alarmtimer: Return relative times in timer_gettime
Returns the time remaining for an alarm timer, rather than the time at
which it is scheduled to expire. If the timer has already expired or it
is not currently scheduled, the it_value members are set to zero.
This new behavior matches that of the other posix-timers and the POSIX
specifications.
This is a change in user-visible behavior, and may break existing
applications. Hopefully, few users rely on the old incorrect behavior.
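A minimal sketch of the new behaviour (illustrative names only: expires stands
for the alarm's absolute expiry time, base->gettime() for the clock read of
its alarm base, and cur_setting for the itimerspec being returned):
	ktime_t remaining = ktime_sub(expires, base->gettime());

	if (ktime_to_ns(remaining) > 0) {
		cur_setting->it_value = ktime_to_timespec(remaining);
	} else {
		/* Already expired or not scheduled: report zero. */
		cur_setting->it_value.tv_sec = 0;
		cur_setting->it_value.tv_nsec = 0;
	}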
Cc: stable@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Sharvil Nanavati <sharvil@google.com>
Signed-off-by: Richard Larocque <rlarocque@google.com>
[jstultz: minor style tweak]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Andrew Hunter [Thu, 4 Sep 2014 21:17:16 +0000 (14:17 -0700)]
jiffies: Fix timeval conversion to jiffies
timeval_to_jiffies tried to round a timeval up to an integral number
of jiffies, but the logic for doing so was incorrect: intervals
corresponding to exactly N jiffies would become N+1. This manifested
itself particularly when repeatedly stopping/starting an itimer:
setitimer(ITIMER_PROF, &val, NULL);
setitimer(ITIMER_PROF, NULL, &val);
would add a full tick to val, _even if it was exactly representable in
terms of jiffies_ (say, the result of a previous rounding.) Doing
this repeatedly would cause unbounded growth in val. So fix the math.
Here's what was wrong with the conversion: we essentially computed
(eliding seconds)
jiffies = usec * (NSEC_PER_USEC/TICK_NSEC)
by using scaling arithmetic, which took the best approximation of
NSEC_PER_USEC/TICK_NSEC with denominator 2^USEC_JIFFIE_SC, i.e. a
fraction x/(2^USEC_JIFFIE_SC), and computed:
jiffies = (usec * x) >> USEC_JIFFIE_SC
and rounded this calculation up in the intermediate form (since we
can't necessarily exactly represent TICK_NSEC in usec.) But the
scaling arithmetic is a (very slight) *over*approximation of the true
value; that is, instead of dividing by (1 usec/ 1 jiffie), we
effectively divided by (1 usec/1 jiffie)-epsilon (rounding
down). This would normally be fine, but we want to round timeouts up,
and we did so by adding 2^USEC_JIFFIE_SC - 1 before the shift; this
would be fine if our division was exact, but dividing this by the
slightly smaller factor was equivalent to adding just _over_ 1 to the
final result (instead of just _under_ 1, as desired.)
In particular, with HZ=1000, we consistently computed that 10000 usec
was 11 jiffies; the same was true for any exact multiple of
TICK_NSEC.
We could possibly still round in the intermediate form, adding
something less than 2^USEC_JIFFIE_SC - 1, but easier still is to
convert usec->nsec, round in nanoseconds, and then convert using
time*spec*_to_jiffies. This adds one constant multiplication, and is
not observably slower in microbenchmarks on recent x86 hardware.
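In outline, the conversion then looks something like this (a sketch of the
approach, not the exact patch; example_timeval_to_jiffies() is a hypothetical
name):
static unsigned long example_timeval_to_jiffies(const struct timeval *value)
{
	struct timespec ts = {
		.tv_sec  = value->tv_sec,
		/* The one extra constant multiplication mentioned above. */
		.tv_nsec = value->tv_usec * NSEC_PER_USEC,
	};

	/* Rounding up to whole jiffies now happens in nanoseconds. */
	return timespec_to_jiffies(&ts);
}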
Tested: the following program:
#include <stdio.h>
#include <stddef.h>
#include <sys/time.h>

int main() {
	struct itimerval zero = {{0, 0}, {0, 0}};
	/* Initially set to 10 ms. */
	struct itimerval initial = zero;
	initial.it_interval.tv_usec = 10000;
	setitimer(ITIMER_PROF, &initial, NULL);
	/* Save and restore several times. */
	for (size_t i = 0; i < 10; ++i) {
		struct itimerval prev;
		setitimer(ITIMER_PROF, &zero, &prev);
		/* On old kernels, this goes up by TICK_USEC every iteration. */
		printf("previous value: %ld %ld %ld %ld\n",
		       prev.it_interval.tv_sec, prev.it_interval.tv_usec,
		       prev.it_value.tv_sec, prev.it_value.tv_usec);
		setitimer(ITIMER_PROF, &prev, NULL);
	}
	return 0;
}
Cc: stable@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Paul Turner <pjt@google.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Reviewed-by: Paul Turner <pjt@google.com>
Reported-by: Aaron Jacobs <jacobsa@google.com>
Signed-off-by: Andrew Hunter <ahh@google.com>
[jstultz: Tweaked to apply to 3.17-rc]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Thomas Gleixner [Thu, 11 Sep 2014 21:44:35 +0000 (23:44 +0200)]
futex: Unlock hb->lock in futex_wait_requeue_pi() error path
futex_wait_requeue_pi() calls futex_wait_setup(). If
futex_wait_setup() succeeds it returns with hb->lock held and
preemption disabled. Now the sanity check after this does:
	if (match_futex(&q.key, &key2)) {
		ret = -EINVAL;
		goto out_put_keys;
	}
which releases the keys but does not release hb->lock.
So we happily return to user space with hb->lock held and therefore
preemption disabled.
Unlock hb->lock before taking the exit route.
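The shape of the fix is simply the following (a sketch; the actual patch
releases the lock through the futex code's own unlock helper for that hash
bucket rather than a bare spin_unlock()):
	if (match_futex(&q.key, &key2)) {
		/* Drop the lock taken by futex_wait_setup() before bailing out. */
		spin_unlock(&hb->lock);
		ret = -EINVAL;
		goto out_put_keys;
	}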
Reported-by: Dave "Trinity" Jones <davej@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Darren Hart <dvhart@linux.intel.com>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1409112318500.4178@nanos
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Al Viro [Fri, 12 Sep 2014 19:56:04 +0000 (20:56 +0100)]
GFS2: fix d_splice_alias() misuses
Callers of d_splice_alias(dentry, inode) don't need an iput(), either
on success or on failure. Either the reference to the inode is stored
in a previously negative dentry, or it's dropped. In either case the
inode reference the caller used to hold is consumed.
__gfs2_lookup() does iput() when d_splice_alias() has failed, resulting
in a double iput() if we ever hit that. And gfs2_create_inode() ends up
not only with a double iput(), but with the link count dropped to zero -
on an inode it has just found in the directory.
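For reference, the calling convention being described looks like this
(example_iget() is a hypothetical lookup helper used purely for illustration,
not a GFS2 function):
static struct dentry *example_lookup(struct inode *dir, struct dentry *dentry,
				     unsigned int flags)
{
	struct inode *inode = example_iget(dir, dentry);	/* hypothetical */

	if (IS_ERR(inode))
		return ERR_CAST(inode);

	/*
	 * d_splice_alias() consumes the inode reference on success and on
	 * failure alike, so there must be no iput() on either path here.
	 */
	return d_splice_alias(inode, dentry);
}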
Cc: stable@vger.kernel.org # v3.14+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Linus Torvalds [Fri, 12 Sep 2014 19:00:49 +0000 (12:00 -0700)]
Merge tag 'char-misc-3.17-rc5' of git://git./linux/kernel/git/gregkh/char-misc
Pull char/misc driver fix from Greg KH:
"Here is one misc driver fix for 3.17-rc5. It resolves a kernel oops
that can happen in the lattice FPGA driver if the firmware isn't
present on the system.
It's been in the linux-next tree for a while now"
* tag 'char-misc-3.17-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
Lattice ECP3 FPGA: Check firmware pointer
Linus Torvalds [Fri, 12 Sep 2014 19:00:18 +0000 (12:00 -0700)]
Merge tag 'staging-3.17-rc5' of git://git./linux/kernel/git/gregkh/staging
Pull staging driver fixes from Greg KH:
"Here are 3 tiny staging driver fixes for 3.17-rc5.
Two are fixes for the imx-drm driver, resolving issues that have been
reported. The other is a memory leak fix for the Android sync driver,
due to changes that went into 3.17-rc1.
All have been in linux-next for a while"
* tag 'staging-3.17-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging:
android: fix reference leak in sync_fence_create
imx-drm: imx-ldb: fix NULL pointer in imx_ldb_unbind()
imx-drm: ipuv3-plane: fix ipu_plane_dpms()