firefly-linux-kernel-4.4.55.git
Jens Axboe [Thu, 29 May 2014 15:53:32 +0000 (09:53 -0600)]
block: add queue flag for disabling SG merging

If devices are not SG starved, we waste a lot of time potentially
collapsing SG segments. Enough that 1.5% of the CPU time goes
to this, at only 400K IOPS. Add a queue flag, QUEUE_FLAG_NO_SG_MERGE,
which just returns the number of vectors in a bio instead of looping
over all segments and checking for collapsible ones.

Add a BLK_MQ_F_SG_MERGE flag so that drivers can opt in to SG
merging, if they so desire.

Signed-off-by: Jens Axboe <axboe@fb.com>
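
As a rough illustration of the opt-in side: a blk-mq driver that still
wants SG merging would set the new flag when filling in its tag set.
This is only a sketch against the tag set layout used in this series;
example_mq_ops, example_cmd and the numbers are made up.

	static struct blk_mq_tag_set example_tag_set = {
		.ops		= &example_mq_ops,
		.nr_hw_queues	= 1,
		.queue_depth	= 64,
		.cmd_size	= sizeof(struct example_cmd),
		/* opt in to SG merging; without this the queue gets
		 * QUEUE_FLAG_NO_SG_MERGE and segment counting just
		 * returns the bio vector count */
		.flags		= BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE,
	};
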
Jens Axboe [Thu, 29 May 2014 14:09:00 +0000 (08:09 -0600)]
block: remove 'magic' from struct blk_plug

I don't think we've ever caught any bugs with this, and there's the
list poisoning for the plug lists to catch uninitialized cases.
So remove the magic member and save 8 bytes in the struct.

Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 28 May 2014 16:11:06 +0000 (18:11 +0200)]
blk-mq: remove alloc_hctx and free_hctx methods

There is no need for drivers to control hardware context allocation
now that we do the context to node mapping in common code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 28 May 2014 16:15:41 +0000 (10:15 -0600)]
blk-mq: add file comments and update copyright notices

None of the blk-mq files have an explanatory comment at the top
for what that particular file does. Add that and add appropriate
copyright notices as well.

Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 27 May 2014 18:59:50 +0000 (20:59 +0200)]
blk-mq: remove blk_mq_alloc_request_pinned

We now only have one caller left and can open code it there in a cleaner
way.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 27 May 2014 18:59:49 +0000 (20:59 +0200)]
blk-mq: do not use blk_mq_alloc_request_pinned in blk_mq_map_request

We already do a non-blocking allocation in blk_mq_map_request, no need
to repeat it.  Just call __blk_mq_alloc_request to wait directly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 27 May 2014 18:59:48 +0000 (20:59 +0200)]
blk-mq: remove blk_mq_wait_for_tags

The current logic for blocking tag allocation is rather confusing, as we
first allocate and then free a tag in blk_mq_wait_for_tags, just
to attempt a non-blocking allocation and then repeat if someone else
managed to grab the tag before us.

Instead change blk_mq_alloc_request_pinned to simply do a blocking tag
allocation itself and use the request we get back from it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 27 May 2014 18:59:47 +0000 (20:59 +0200)]
blk-mq: initialize request in __blk_mq_alloc_request

Both callers of __blk_mq_alloc_request want to initialize the request, so
lift it into the common path.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 27 May 2014 18:59:46 +0000 (20:59 +0200)]
blk-mq: merge blk_mq_alloc_reserved_request into blk_mq_alloc_request

Instead of having two almost identical copies of the same code just let
the callers pass in the reserved flag directly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 28 May 2014 14:08:02 +0000 (08:08 -0600)]
blk-mq: add helper to insert requests from irq context

Both the cache flush state machine and the SCSI midlayer want to submit
requests from irq context, and the current per-request requeue_work
unfortunately causes corruption due to sharing with the csd field for
flushes.  Replace them with a per-request_queue list of requests to
be requeued.

Based on an earlier test by Ming Lei.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Ming Lei <tom.leiming@gmail.com>
Tested-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 28 May 2014 14:06:34 +0000 (08:06 -0600)]
blk-mq: remove stale comment for blk_mq_complete_request()

It works for both IPI and local completions as of commit
95f096849932.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 27 May 2014 23:46:48 +0000 (17:46 -0600)]
blk-mq: allow non-softirq completions

Right now we export two ways of completing a request:

1) blk_mq_complete_request(). This uses an IPI (if needed) and
   completes through q->softirq_done_fn(). It also works with
   timeouts.

2) blk_mq_end_io(). This completes inline, and ignores any timeout
   state of the request.

Let blk_mq_complete_request() handle non-softirq_done_fn completions
as well, by just completing inline. If a driver has enough completion
ports to place completions correctly, it need not define a
mq_ops->complete() and we can avoid an indirect function call by
doing the completion inline.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 27 May 2014 18:06:53 +0000 (12:06 -0600)]
blk-mq: pass in suggested NUMA node to ->alloc_hctx()

Drivers currently have to figure this out on their own, and they
are missing the information to do it properly. The ones that did
attempt it got it wrong.

So just pass in the suggested node directly to the alloc
function.

Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Tue, 27 May 2014 15:35:14 +0000 (23:35 +0800)]
block: only allocate/free mq_usage_counter in blk-mq

The percpu counter is only used for blk-mq, so move
its allocation and freeing inside blk-mq, and don't
allocate it for legacy queue devices.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Tue, 27 May 2014 15:35:13 +0000 (23:35 +0800)]
blk-mq: avoid code duplication

blk_mq_exit_hw_queues() and blk_mq_free_hw_queues()
are introduced to avoid code duplication.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Tue, 27 May 2014 14:34:45 +0000 (08:34 -0600)]
blk-mq: fix leak of hctx->ctx_map

hctx->ctx_map should have been freed inside blk_mq_free_queue().

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Fabian Frederick [Mon, 26 May 2014 20:19:14 +0000 (22:19 +0200)]
block/blk-lib.c: make __blkdev_issue_zeroout static

__blkdev_issue_zeroout is only used in blk-lib.c

Cc: Jens Axboe <axboe@kernel.dk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 26 May 2014 09:45:02 +0000 (11:45 +0200)]
blk-mq: idle all hardware contexts before freeing a queue

Without this we can leak the active_queues reference if a command is
freed while it is considered active.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 23 May 2014 20:14:57 +0000 (14:14 -0600)]
blk-mq: allow setting of per-request timeouts

Currently blk-mq uses the queue timeout for all requests. But
for some commands, drivers may want to set a specific timeout
for special requests. Allow this to be passed in through
request->timeout, and use it if set.

Signed-off-by: Jens Axboe <axboe@fb.com>
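
For example, a driver issuing a special internal command could give just
that one request a longer timeout than the queue-wide default. A hedged
sketch, with the allocation flags and the 60 second value picked purely
for illustration:

	struct request *rq;

	rq = blk_mq_alloc_request(q, WRITE, GFP_KERNEL, false);
	if (!rq)
		return -ENOMEM;

	/* used instead of q->rq_timeout for this request only */
	rq->timeout = 60 * HZ;
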
Sam Bradshaw [Fri, 23 May 2014 19:30:16 +0000 (13:30 -0600)]
blk-mq: export blk_mq_tag_busy_iter

Export the blk-mq in-flight tag iterator for driver consumption.
This is particularly useful in exception paths or SRSI where
in-flight IOs need to be cancelled and/or reissued. The NVMe driver
conversion will use this.

Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Signed-off-by: Matias Bjørling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 22 May 2014 16:40:51 +0000 (10:40 -0600)]
blk-mq: split make request handler for multi and single queue

We want slightly different behavior from them:

- On single queue devices, we currently use the per-process plug
  for deferred IO and for merging.

- On multi queue devices, we don't use the per-process plug, but
  we want to go straight to hardware for SYNC IO.

Split blk_mq_make_request() into a blk_sq_make_request() for single
queue devices, and retain blk_mq_make_request() for multi queue
devices. Then we don't need multiple checks for q->nr_hw_queues
in the request mapping.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 21 May 2014 20:01:15 +0000 (14:01 -0600)]
blk-mq: save memory by freeing requests on unused hardware queues

Depending on the topology of the machine and the number of queues
exposed by a device, we can end up in a situation where some of
the hardware queues are unused (as in, they don't map to any
software queues). For this case, free up the memory used by the
request map, as we will not use it. This can be a substantial
amount of memory, depending on the number of queues vs CPUs and
the queue depth of the device.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 21 May 2014 19:59:08 +0000 (13:59 -0600)]
blk-mq: allow the hctx cpu hotplug notifier to return errors

Prepare this for the next patch which adds more smarts in the
plugging logic, so that we can save some memory.

Signed-off-by: Jens Axboe <axboe@fb.com>
Robert Elliott [Tue, 20 May 2014 21:46:26 +0000 (16:46 -0500)]
blk-mq: Micro-optimize blk_queue_nomerges() check

In blk_mq_make_request(), do the blk_queue_nomerges() check
outside the call to blk_attempt_plug_merge() to eliminate
function call overhead when nomerges=2 (disabled).

Signed-off-by: Robert Elliott <elliott@hp.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 20 May 2014 21:17:27 +0000 (15:17 -0600)]
blk-mq: initialize q->nr_requests after calling blk_queue_make_request()

blk_queue_make_request() overwrites our set value for q->nr_requests,
turning it into the default of 128. Set this appropriately after
initializing queue values in blk_queue_make_request().

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 20 May 2014 17:49:02 +0000 (11:49 -0600)]
blk-mq: allow changing of queue depth through sysfs

For request_fn based devices, the block layer exports a 'nr_requests'
file through sysfs to allow adjusting of queue depth on the fly.
Currently this returns -EINVAL for blk-mq, since it's not wired up.
Wire this up for blk-mq, so that it now also allows dynamic
adjustment of the allowed queue depth for any given block device
managed by blk-mq.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 20 May 2014 14:17:35 +0000 (08:17 -0600)]
htmldocs: fix bio.c location

Commit f9c78b2be2ca moved bio.c from fs/ to block/, but didn't
update the docbook location. Fix that up.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 20 May 2014 02:01:52 +0000 (20:01 -0600)]
block: move mm/bounce.c to block/

Continue moving some of the block files that are scattered around.
bounce.c contains only code for bouncing the contents of a bio.
It's block proper code, not mm code.

Suggested-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Mon, 19 May 2014 17:52:35 +0000 (11:52 -0600)]
Merge branch 'for-3.16/blk-mq-tagging' into for-3.16/core

Signed-off-by: Jens Axboe <axboe@fb.com>
Conflicts:
block/blk-mq-tag.c

Jens Axboe [Mon, 19 May 2014 15:23:55 +0000 (09:23 -0600)]
blk-mq: switch ctx pending map to the sparser blk_align_bitmap

Each hardware queue has a bitmap of software queues with pending
requests. When new IO is queued on a software queue, the bit is
set, and when IO is pruned on a hardware queue run, the bit is
cleared. This causes a lot of traffic. Switch this from the regular
BITS_PER_LONG bitmap to a sparser layout, similarly to what was
done for blk-mq tagging.

A 20% performance increase was observed for single threaded IO, and
about a 15% performance increase on multiple threads driving the
same device.

Signed-off-by: Jens Axboe <axboe@fb.com>
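
The sparser layout referred to here is, roughly, one bitmap word padded
out to its own cache line so that different software queues touch
different lines; treat the exact definition below as approximate:

	struct blk_align_bitmap {
		unsigned long word;
		unsigned long depth;
	} ____cacheline_aligned_in_smp;
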
Jens Axboe [Mon, 19 May 2014 15:17:48 +0000 (09:17 -0600)]
blk-mq: move the cache friendly bitmap type out of blk-mq-tag

We will use it for the pending list in blk-mq core as well.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Mon, 19 May 2014 17:02:18 +0000 (11:02 -0600)]
block: move ioprio.c from fs/ to block/

Like commit f9c78b2b, move this block related file outside
of fs/ and into the core block directory, block/.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Mon, 19 May 2014 14:16:41 +0000 (08:16 -0600)]
block: move bio.c and bio-integrity.c from fs/ to block/

They really belong in block/, especially now that it's not in
drivers/block/ anymore. Additionally, the get_maintainer script
gets it wrong when in fs/.

Suggested-by: Christoph Hellwig <hch@infradead.org>
Acked-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 13 May 2014 21:10:52 +0000 (15:10 -0600)]
blk-mq: improve support for shared tags maps

This adds support for active queue tracking, meaning that the
blk-mq tagging maintains a count of active users of a tag set.
This allows us to maintain a notion of fairness between users,
so that we can distribute the tag depth evenly without starving
some users while allowing others to run unfairly deep queues.

If sharing of a tag set is detected, each hardware queue will
track the depth of its own queue. And if this exceeds the total
depth divided by the number of active queues, the user is actively
throttled down.

The active queue count is done lazily to avoid bouncing that data
between submitter and completer. Each hardware queue gets marked
active when it allocates its first tag, and gets marked inactive
when 1) the last tag is cleared, and 2) the queue timeout grace
period has passed.

Signed-off-by: Jens Axboe <axboe@fb.com>
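
In rough pseudo-kernel-C, the throttling rule described above boils down
to the check below; the helper name and parameters are invented for the
sketch, and the real code also does the lazy active-queue accounting:

	/* may this hardware queue grab another tag right now? */
	static bool hctx_may_queue_sketch(unsigned int total_depth,
					  unsigned int nr_active_queues,
					  unsigned int tags_in_use)
	{
		unsigned int allowed;

		if (nr_active_queues <= 1)
			return true;

		/* equal share per active user, but always at least one tag */
		allowed = max(1U, total_depth / nr_active_queues);
		return tags_in_use < allowed;
	}
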
Jens Axboe [Sat, 10 May 2014 21:44:42 +0000 (15:44 -0600)]
Merge branch 'for-3.16/blk-mq-tagging' into for-3.16/core

Ming Lei [Sat, 10 May 2014 17:01:51 +0000 (01:01 +0800)]
blk-mq: bitmap tag: cleanup blk_mq_init_tags

Neither nr_cache nor nr_tags is needed for bitmap tags anymore.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 10 May 2014 21:43:14 +0000 (15:43 -0600)]
blk-mq: bitmap tag: select random tag between 0 and (depth - 1)

The tag should be selected at random between 0 and
(depth - 1) with probability 1/depth, instead of between 0 and
(depth - 2) with probability 1/(depth - 1).

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 10 May 2014 17:01:49 +0000 (01:01 +0800)]
blk-mq: bitmap tag: remove barrier in bt_clear_tag()

The barrier isn't necessary because both atomic_dec_and_test()
and wake_up() already imply a memory barrier.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 10 May 2014 17:01:48 +0000 (01:01 +0800)]
blk-mq: bitmap tag: use clear_bit_unlock in bt_clear_tag()

The unlock memory barrier needs to order access to the request in the
free path against clearing the tag bit; otherwise the request free path
may see a still-allocated request, or an initialized request in the
allocation path might be modified by the ongoing free path.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 9 May 2014 21:48:23 +0000 (15:48 -0600)]
block: only calculate part_in_flight() once

We first check if we have inflight IO, then retrieve that
same number again. Usually this isn't that costly since the
chance of having the data dirtied in between is small, but
there's no reason for calling part_in_flight() twice.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 9 May 2014 20:54:08 +0000 (14:54 -0600)]
blk-mq: fix race in IO start accounting

Commit c6d600c6 opened up a small race where we could attempt to
account IO completion on a request, racing with IO start accounting.
Fix this up by ensuring that we've accounted for IO start before
inserting the request.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 9 May 2014 19:41:15 +0000 (13:41 -0600)]
blk-mq: use sparser tag layout for lower queue depth

For best performance, spreading tags over multiple cachelines
makes the tagging more efficient on multicore systems. But since
we have 8 * sizeof(unsigned long) tags per cacheline, we don't
always get a nice spread.

Attempt to spread the tags over at least 4 cachelines, using fewer
bits per unsigned long if we have to. This improves
tagging performance in setups with 32-128 tags. For higher depths,
the spread is the same as before (BITS_PER_LONG tags per cacheline).

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 9 May 2014 15:36:49 +0000 (09:36 -0600)]
blk-mq: implement new and more efficient tagging scheme

blk-mq currently uses percpu_ida for tag allocation. But that only
works well if the ratio between tag space and number of CPUs is
sufficiently high. For most devices and systems, that is not the
case. The end result is that we either only utilize the tag space
partially, or we end up attempting to fully exhaust it and run
into lots of lock contention with stealing between CPUs. This is
not optimal.

This new tagging scheme is a hybrid bitmap allocator. It uses
two tricks to both be SMP friendly and allow full exhaustion
of the space:

1) We cache the last allocated (or freed) tag on a per blk-mq
   software context basis. This allows us to limit the space
   we have to search. The key element here is not caching it
   in the shared tag structure, otherwise we end up dirtying
   more shared cache lines on each allocate/free operation.

2) The tag space is split into cache line sized groups, and
   each context will start off randomly in that space. Even up
   to full utilization of the space, this divides the tag users
   efficiently into cache line groups, avoiding dirtying the same
   one both between allocators and between allocator and freer.

This scheme shows drastically better behaviour, both on small
tag spaces and on large ones as well. It has been tested extensively
to show better performance for all the cases blk-mq cares about.

Signed-off-by: Jens Axboe <axboe@fb.com>
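
A stripped-down sketch of the allocation side of the two ideas above,
ignoring the randomized start offsets, wrap-around and wait-queue
handling of the real implementation; the flat word array and the cached
index parameter are simplifications for illustration only:

	static int sketch_get_tag(unsigned long *words, unsigned int nr_words,
				  unsigned int *cached_word)
	{
		unsigned int i;

		for (i = 0; i < nr_words; i++) {
			/* start where we last succeeded, not at word 0 */
			unsigned int w = (*cached_word + i) % nr_words;
			unsigned int bit;

			for (bit = 0; bit < BITS_PER_LONG; bit++) {
				if (!test_and_set_bit(bit, &words[w])) {
					*cached_word = w;
					return w * BITS_PER_LONG + bit;
				}
			}
		}

		return -1;	/* nothing free, caller has to block or retry */
	}
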
Christoph Hellwig [Tue, 6 May 2014 10:12:45 +0000 (12:12 +0200)]
blk-mq: initialize struct request fields individually

This allows us to avoid a non-atomic memset over ->atomic_flags as well
as killing lots of duplicate initializations.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 8 May 2014 20:50:19 +0000 (14:50 -0600)]
blk-mq: update a hotplug comment for grammar

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 7 May 2014 16:26:44 +0000 (10:26 -0600)]
blk-mq: add basic round-robin of what CPU to queue workqueue work on

Right now we just pick the first CPU in the mask, but that can
easily overload that one. Add some basic batching and round-robin
all the entries in the mask instead.

Signed-off-by: Jens Axboe <axboe@fb.com>
Fabian Frederick [Fri, 2 May 2014 16:28:17 +0000 (18:28 +0200)]
block/blk-throttle.c: fix return of 0/1 with return type bool

Fix 4 coccinelle warnings.

Cc: Jens Axboe <axboe@kernel.dk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Jens Axboe <axboe@fb.com>
Fabian Frederick [Fri, 2 May 2014 16:21:45 +0000 (18:21 +0200)]
block/blk-iopoll.c: use iop instead of iopoll

All blk_iopoll functions use iop for the parent iopoll structure except
blk_iopoll_complete. This also fixes one kernel-doc warning.

Cc: Jens Axboe <axboe@kernel.dk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 2 May 2014 17:24:48 +0000 (11:24 -0600)]
blk-mq: remove extra requeue trace

We already issue a blktrace requeue event in
__blk_mq_requeue_request(), don't do it from the original caller
as well.

Signed-off-by: Jens Axboe <axboe@fb.com>
Masanari Iida [Mon, 28 Apr 2014 03:38:34 +0000 (12:38 +0900)]
block: Fix format string mismatch in cfq-iosched.c

Fix format string mismatch in cfq_var_show()

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 30 Apr 2014 19:43:56 +0000 (13:43 -0600)]
blk-mq: refactor request insertion/merging

Refactor the logic around adding a new bio to a software queue,
so we nest the ctx->lock where we really need it (merge and
insertion) and don't hold it when we don't (init and IO start
accounting).

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 30 Apr 2014 19:43:08 +0000 (13:43 -0600)]
blk-mq: remove debug BUG_ON() when draining software queues

It's never been of any use, let's get rid of it.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 30 Apr 2014 02:49:48 +0000 (20:49 -0600)]
blk-mq: fix waiting for reserved tags

blk_mq_wait_for_tags() is only able to wait for "normal" tags,
not reserved tags. Pass in which one we should attempt to get
a tag for, so that waiting for reserved tags will work.

Reserved tags are used for internal commands, which are usually
serialized. Hence no waiting generally takes place, but we should
ensure that it actually works if users need that functionality.

Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Fri, 25 Apr 2014 07:36:37 +0000 (00:36 -0700)]
random: export add_disk_randomness

This will be needed for pending changes to the scsi midlayer that now
calls lower level block APIs, as well as any blk-mq driver that wants to
contribute to the random pool.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Fri, 25 Apr 2014 12:14:48 +0000 (14:14 +0200)]
block: fold __blk_add_timer into blk_add_timer

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Fri, 25 Apr 2014 09:32:53 +0000 (02:32 -0700)]
blk-mq: respect rq_affinity

The blk-mq code is using its own version of the I/O completion affinity
tunables, which causes a few issues:

 - the rq_affinity sysfs file doesn't work for blk-mq devices, even if it
   still is present, thus breaking existing tuning setups.
 - the rq_affinity = 1 mode, which is the default for legacy request based
   drivers isn't implemented at all.
 - blk-mq drivers don't implement any completion affinity with the default
   flag settings.

This patch removes the blk-mq ipi_redirect flag and sysfs file, as well
as the internal BLK_MQ_F_SHOULD_IPI flag and replaces it with code that
respects the queue-wide rq_affinity flags and also implements the
rq_affinity = 1 mode.

This means I/O completion affinity can now only be tuned block-queue wide
instead of per context, which seems more sensible to me anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 24 Apr 2014 14:51:47 +0000 (08:51 -0600)]
blk-mq: fix race with timeouts and requeue events

If a requeue event races with a timeout, we can get into the
situation where we attempt to complete a request from the
timeout handler when it's not started anymore. This causes a crash.
So have the timeout handler check that REQ_ATOM_STARTED is still
set on the request - if not, we ignore the event. If this happens,
the request has now been marked as complete. As a consequence, we
need to ensure that we clear REQ_ATOM_COMPLETE in blk_mq_start_request(),
so as to maintain proper request state.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 24 Apr 2014 14:50:38 +0000 (08:50 -0600)]
Revert "blk-mq: initialize req->q in allocation"

This reverts commit 6a3c8a3ac0e68dcfc2a01f4aa1ca0edd1a1701eb.

We need selective clearing of the request to make the init-at-free
time completely safe. Otherwise we end up stomping on
rq->atomic_flags, which we don't want to do.

Ming Lei [Wed, 23 Apr 2014 16:07:34 +0000 (00:07 +0800)]
blk-mq: fix leak of set->tags

set->tags should be freed in blk_mq_free_tag_set().

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Fabian Frederick [Tue, 22 Apr 2014 21:09:07 +0000 (15:09 -0600)]
fs/bio.c: remove nr_segs (unused function parameter)

nr_segs is no longer used in bio_alloc_map_data since c8db444820a1e3
("block: Don't save/copy bvec array anymore")

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Fabian Frederick [Tue, 22 Apr 2014 21:09:05 +0000 (15:09 -0600)]
fs/bio: remove bs parameter in biovec_create_pool

bs is no longer used in biovec_create_pool since 9f060e2231ca96 ("block:
Convert integrity to bvec_alloc_bs()")

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Jens Axboe <axboe@fb.com>
Fabian Frederick [Thu, 17 Apr 2014 19:41:16 +0000 (21:41 +0200)]
block/blk-throttle.c: add static to blk_throtl_dispatch_work_fn

blk_throtl_dispatch_work_fn is only used in blk-throttle.c

Cc: Jens Axboe <axboe@kernel.dk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Jens Axboe <axboe@fb.com>
Randy Dunlap [Sun, 20 Apr 2014 23:03:31 +0000 (16:03 -0700)]
fs: fix new kernel-doc warnings in fs/bio.c

Fix new kernel-doc warnings in fs/bio.c:

Warning(fs/bio.c:316): No description found for parameter 'bio'
Warning(fs/bio.c:316): No description found for parameter 'parent'

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 19 Apr 2014 10:00:19 +0000 (18:00 +0800)]
blk-mq: initialize req->q in allocation

The patch basically reverts the earlier patch (blk-mq:
initialize request on allocation) in Jens's tree (already
in -next), and only initializes req->q in allocation,
for two reasons:

- presumed cache hotness on completion
- blk_rq_tagged(rq) depends on reset of req->mq_ctx

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 19 Apr 2014 10:00:18 +0000 (18:00 +0800)]
blk-mq: use (1 << order) to implement order_to_size()

Cc: Jörg-Volker Peetz <jvpeetz@web.de>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 19 Apr 2014 10:00:17 +0000 (18:00 +0800)]
blk-mq: fix allocation of set->tags

The type of set->tags is struct blk_mq_tags **.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 19 Apr 2014 10:00:16 +0000 (18:00 +0800)]
blk-mq: free hctx->ctx_map when init failed

Avoid memory leak in the failure path.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 17 Apr 2014 03:37:30 +0000 (21:37 -0600)]
sd/skd: stuff discard page in request->completion_data

Store the pointer to the page there, so we can always safely
reference it from end_io context where ->bio may have been
cleared.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 16 Apr 2014 20:14:33 +0000 (14:14 -0600)]
jsflash: missed conversion from rq->buffer to bio_data(rq->bio)

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 16 Apr 2014 17:36:54 +0000 (11:36 -0600)]
block: relax when to modify the timeout timer

Since we are now, by default, applying timer slack to expiry times,
the logic for when to modify a timer in the block code is suboptimal.
The block layer keeps a forward rolling timer per queue for all
requests, and modifies this timer if a request has a shorter timeout
than what the current expiry time is. However, this breaks down
when our rounded timer values get applied slack. Then each new
request ends up modifying the timer, since we're still a little
in front of the timer + slack.

Fix this by allowing a tolerance of HZ / 2, the timeout handling
doesn't need to be very precise. This drastically cuts down
the number of timer modifications we have to make.

Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:59 +0000 (09:44 +0200)]
block: export blk_finish_request

This allows mirroring the blk-mq code flow for a more readable I/O
completion handler in SCSI.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:58 +0000 (09:44 +0200)]
blk-mq: rename mq_flush_work struct request member

We will use this work_struct to requeue scsi commands from the
completion handler as well, so give it a more generic name.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:57 +0000 (09:44 +0200)]
blk-mq: add blk_mq_requeue_request

This allows requeueing a request that has been accepted by ->queue_rq
earlier.  This is needed by the SCSI layer in various error conditions.

The existing internal blk_mq_requeue_request is renamed to
__blk_mq_requeue_request as it is a lower level building block for this
functionality.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
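
Expected usage is along the lines of the fragment below, e.g. from a
driver's error handling once it decides a previously accepted command
should be retried; the surrounding driver logic is hypothetical:

	if (example_wants_retry(rq)) {
		/* hand the request back to blk-mq to be issued again */
		blk_mq_requeue_request(rq);
		return;
	}

	/* otherwise complete it as usual */
	blk_mq_end_io(rq, error);
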
Christoph Hellwig [Wed, 16 Apr 2014 07:44:56 +0000 (09:44 +0200)]
blk-mq: add blk_mq_start_hw_queues

Add a helper to unconditionally kick contexts of a queue.  This will
be needed by the SCSI layer to provide fair queueing between multiple
devices on a single host.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 16:48:08 +0000 (10:48 -0600)]
blk-mq: add blk_mq_delay_queue

Add a blk-mq equivalent to blk_delay_queue so that the scsi layer can ask
to be kicked again after a delay.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Modified by me to kill the unnecessary preempt disable/enable
in the delayed workqueue handler.

Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:54 +0000 (09:44 +0200)]
blk-mq: add async parameter to blk_mq_start_stopped_hw_queues

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:53 +0000 (09:44 +0200)]
blk-mq: bidi support

Add two unlikely branches to make sure the resid is initialized correctly
for bidi request pairs, and the second request gets properly freed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:52 +0000 (09:44 +0200)]
blk-mq: allow drivers to hook into I/O completion

Split out the bottom half of blk_mq_end_io so that drivers can perform
work when they know a request has been completed, but before it has been
freed.  This also obsoletes blk_mq_end_io_partial as drivers can now
pass any value to blk_update_request directly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 16 Apr 2014 16:38:35 +0000 (10:38 -0600)]
blk-mq: kill preempt disable/enable in blk_mq_work_fn()

blk_mq_work_fn() is always invoked off the bounded workqueues,
so it can happily preempt among the queues in that set without
causing any issues for blk-mq.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 16 Apr 2014 15:23:48 +0000 (09:23 -0600)]
blk-mq: don't use preempt_count() to check for right CPU

UP or CONFIG_PREEMPT_NONE will return 0, and what we really
want to check is whether or not we are on the right CPU.
So don't make PREEMPT part of this, just test the CPU in
the mask directly.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 16 Apr 2014 14:26:20 +0000 (08:26 -0600)]
gdrom: missed conversion from req->buffer

The friendly Intel kbuild test robot reported:

drivers/cdrom/gdrom.c: In function 'gdrom_readdisk_dma':
drivers/cdrom/gdrom.c:605:3: error: 'struct request' has no member named 'buffer'

Convert that from req->buffer to bio_data(rq->bio). Apparently
my grep missed this one, and I don't build for Sega Dreamcast
enough.

Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 14 Apr 2014 08:30:12 +0000 (10:30 +0200)]
block: all blk-mq requests are tagged

Instead of setting the REQ_QUEUED flag on each of them just take it into
account in the only macro checking it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 15 Apr 2014 20:14:00 +0000 (14:14 -0600)]
blk-mq: split out tag initialization, support shared tags

Add a new blk_mq_tag_set structure that gets set up before we initialize
the queue.  A single blk_mq_tag_set structure can be shared by multiple
queues.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Modular export of blk_mq_{alloc,free}_tagset added by me.

Signed-off-by: Jens Axboe <axboe@fb.com>
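
The intended flow is that a driver fills in one blk_mq_tag_set and then
creates any number of request queues against it. A hedged sketch with
invented names and no error handling beyond the tag set allocation:

	static struct blk_mq_tag_set example_set = {
		.ops		= &example_mq_ops,
		.nr_hw_queues	= 1,
		.queue_depth	= 128,
		.numa_node	= NUMA_NO_NODE,
		.cmd_size	= sizeof(struct example_cmd),
	};

	struct request_queue *q0, *q1;

	if (blk_mq_alloc_tag_set(&example_set))
		return -ENOMEM;

	/* both queues share the tags and requests of example_set */
	q0 = blk_mq_init_queue(&example_set);
	q1 = blk_mq_init_queue(&example_set);

	/* on teardown, after the queues themselves are gone */
	blk_mq_free_tag_set(&example_set);
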
Christoph Hellwig [Mon, 14 Apr 2014 08:30:10 +0000 (10:30 +0200)]
blk-mq: initialize request on allocation

If we want to share tag and request allocation between queues we cannot
initialize the request at init/free time, but need to initialize it
at allocation time as it might get used for different queues over its
lifetime.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 15 Apr 2014 19:59:10 +0000 (13:59 -0600)]
blk-mq: add ->init_request and ->exit_request methods

The current blk_mq_init_commands/blk_mq_free_commands interface has
two problems:

 1) Because only the constructor is passed to blk_mq_init_commands there
    is no easy way to clean up when a command initialization failed.  The
    current code simply leaks the allocations done in the constructor.

 2) There is no good place to call blk_mq_free_commands: before
    blk_cleanup_queue there is no guarantee that all outstanding
    commands have completed, so we can't free them yet.  After
    blk_cleanup_queue the queue has usually been freed.  This can be
    worked around by grabbing an unconditional reference before calling
    blk_cleanup_queue and dropping it after blk_mq_free_commands is
    done, although that's not exactly pretty and driver writers are
    guaranteed to get it wrong sooner or later.

Both issues are easily fixed by making the request constructor and
destructor normal blk_mq_ops methods.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
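
With this change the per-request setup and teardown hang off blk_mq_ops,
roughly as sketched below. The driver-side names are invented and the
exact callback prototypes should be taken as approximate:

	static int example_init_request(void *data, struct request *rq,
					unsigned int hctx_idx,
					unsigned int request_idx,
					unsigned int numa_node)
	{
		struct example_cmd *cmd = blk_mq_rq_to_pdu(rq);

		/* per-request driver state, set up once here and torn
		 * down again in the matching ->exit_request callback */
		cmd->rq = rq;
		return 0;
	}

	static struct blk_mq_ops example_mq_ops = {
		.queue_rq	= example_queue_rq,
		.map_queue	= blk_mq_map_queue,
		.init_request	= example_init_request,
		.exit_request	= example_exit_request,
	};
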
Christoph Hellwig [Mon, 14 Apr 2014 08:30:08 +0000 (10:30 +0200)]
blk-mq: make ->flush_rq fully transparent to drivers

Drivers shouldn't have to care about the block layer setting aside a
request to implement the flush state machine.  We already override the
mq context and tag to make it more transparent, but so far haven't dealt
with the driver private data in the request.  Make sure to override this
as well, and while we're at it add a proper helper sitting in blk-mq.c
that implements the full impersonation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 14 Apr 2014 08:30:07 +0000 (10:30 +0200)]
blk-mq: do not initialize req->special

Drivers can reach their private data easily using the blk_mq_rq_to_pdu
helper and don't need req->special.  By not initializing it code can
be simplified nicely, and we also shave off a few more instructions from
the I/O path.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
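
The pattern being referred to: the driver's per-command structure lives
directly behind struct request (sized by cmd_size in the tag set), so it
is reached with blk_mq_rq_to_pdu() instead of through req->special. A
minimal, hypothetical ->queue_rq fragment:

	static int example_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *rq)
	{
		/* pdu sits right after struct request, no req->special needed */
		struct example_cmd *cmd = blk_mq_rq_to_pdu(rq);

		if (example_issue(hctx->driver_data, cmd))
			return BLK_MQ_RQ_QUEUE_ERROR;

		return BLK_MQ_RQ_QUEUE_OK;
	}
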
Christoph Hellwig [Mon, 14 Apr 2014 08:30:06 +0000 (10:30 +0200)]
blk-mq: initialize resid_len

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 10 Apr 2014 15:46:28 +0000 (09:46 -0600)]
block: remove struct request buffer member

This was used in the olden days, back when onions were proper
yellow. Basically it mapped to the current buffer to be
transferred. With highmem being added more than a decade ago,
most drivers map pages out of a bio, and rq->buffer isn't
pointing at anything valid.

Convert old style drivers to just use bio_data().

For the discard payload use case, just reference the page
in the bio.

Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 15 Apr 2014 20:02:24 +0000 (14:02 -0600)]
Merge tag 'v3.15-rc1' into for-3.16/core

We don't like this, but things have diverged with the blk-mq fixes
in 3.15-rc1. So merge it in.

Linus Torvalds [Sun, 13 Apr 2014 21:18:35 +0000 (14:18 -0700)]
Linux 3.15-rc1

Geert Uytterhoeven [Sun, 13 Apr 2014 18:46:22 +0000 (20:46 +0200)]
mm: Initialize error in shmem_file_aio_read()

Some versions of gcc even warn about it:

  mm/shmem.c: In function ‘shmem_file_aio_read’:
  mm/shmem.c:1414: warning: ‘error’ may be used uninitialized in this function

If the loop is aborted during the first iteration by one of the two
first break statements, error will be uninitialized.

Introduced by commit 6e58e79db8a1 ("introduce copy_page_to_iter, kill
loop over iovec in generic_file_aio_read()").

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Geert Uytterhoeven [Sun, 13 Apr 2014 18:46:21 +0000 (20:46 +0200)]
cifs: Use min_t() when comparing "size_t" and "unsigned long"

On 32 bit, size_t is "unsigned int", not "unsigned long", causing the
following warning when comparing with PAGE_SIZE, which is always "unsigned
long":

  fs/cifs/file.c: In function ‘cifs_readdata_to_iov’:
  fs/cifs/file.c:2757: warning: comparison of distinct pointer types lacks a cast

Introduced by commit 7f25bba819a3 ("cifs_iovec_read: keep iov_iter
between the calls of cifs_readdata_to_iov()"), which changed the
signedness of "remaining" and the code from min_t() to min().

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sun, 13 Apr 2014 20:28:13 +0000 (13:28 -0700)]
Merge branch 'slab/next' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/linux

Pull slab changes from Pekka Enberg:
 "The biggest change is byte-sized freelist indices which reduces slab
  freelist memory usage:

    https://lkml.org/lkml/2013/12/2/64"

* 'slab/next' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/linux:
  mm: slab/slub: use page->list consistently instead of page->lru
  mm/slab.c: cleanup outdated comments and unify variables naming
  slab: fix wrongly used macro
  slub: fix high order page allocation problem with __GFP_NOFAIL
  slab: Make allocations with GFP_ZERO slightly more efficient
  slab: make more slab management structure off the slab
  slab: introduce byte sized index for the freelist of a slab
  slab: restrict the number of objects in a slab
  slab: introduce helper functions to get/set free object
  slab: factor out calculate nr objects in cache_estimate

Linus Torvalds [Sun, 13 Apr 2014 01:22:27 +0000 (18:22 -0700)]
Merge branch 'misc' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild

Pull misc kbuild changes from Michal Marek:
 "Here is the non-critical part of kbuild:
   - One bogus coccinelle check removed, one check fixed not to suggest
     the obsolete PTR_RET macro
   - scripts/tags.sh does not index the generated *.mod.c files
   - new objdiff tool to list differences between two versions of an
     object file
   - A fix for scripts/bootgraph.pl"

* 'misc' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild:
  scripts/coccinelle: Use PTR_ERR_OR_ZERO
  scripts/bootgraph.pl: Add graphic header
  scripts: objdiff: detect object code changes between two commits
  Coccicheck: Remove memcpy to struct assignment test
  scripts/tags.sh: Ignore *.mod.c

Mikulas Patocka [Wed, 9 Apr 2014 01:52:05 +0000 (21:52 -0400)]
sym53c8xx_2: Set DID_REQUEUE return code when aborting squeue

This patch fixes I/O errors with the sym53c8xx_2 driver when the disk
returns QUEUE FULL status.

When the controller encounters an error (including QUEUE FULL or BUSY
status), it aborts all not yet submitted requests in the function
sym_dequeue_from_squeue.

This function aborts them with DID_SOFT_ERROR.

If the disk has a full tag queue, the request that caused the overflow is
aborted with QUEUE FULL status (and the scsi midlayer properly retries
it until it is accepted by the disk), but the sym53c8xx_2 driver aborts
the following requests with DID_SOFT_ERROR --- for them, the midlayer
does just a few retries and then signals the error up to sd.

The result is that disk returning QUEUE FULL causes request failures.

The error was reproduced on 53c895 with COMPAQ BD03685A24 disk
(rebranded ST336607LC) with a command queue of 48 or 64 tags.  The disk has
64 tags, but under some access patterns it returns QUEUE FULL when there
are less than 64 pending tags.  The SCSI specification allows returning
QUEUE FULL anytime and it is up to the host to retry.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: Matthew Wilcox <matthew@wil.cx>
Cc: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Paul Mackerras [Fri, 11 Apr 2014 06:43:35 +0000 (16:43 +1000)]
powerpc: Don't try to set LPCR unless we're in hypervisor mode

Commit 8f619b5429d9 ("powerpc/ppc64: Do not turn AIL (reloc-on
interrupts) too early") added code to set the AIL bit in the LPCR
without checking whether the kernel is running in hypervisor mode.  The
result is that when the kernel is running as a guest (i.e., under
PowerKVM or PowerVM), the processor takes a privileged instruction
interrupt at that point, causing a panic.  The visible result is that
the kernel hangs after printing "returning from prom_init".

This fixes it by checking for hypervisor mode being available before
setting LPCR.  If we are not in hypervisor mode, we enable relocation-on
interrupts later in pSeries_setup_arch using the H_SET_MODE hcall.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Davidlohr Bueso [Wed, 9 Apr 2014 18:55:07 +0000 (11:55 -0700)]
futex: update documentation for ordering guarantees

Commits 11d4616bd07f ("futex: revert back to the explicit waiter
counting code") and 69cd9eba3886 ("futex: avoid race between requeue and
wake") changed some of the finer details of how we think about futexes.
One was a late fix and the other a consequence of overlooking the whole
requeuing logic.

The first change caused our documentation to be incorrect, and the
second made us aware that we need to explicitly add more details to it.

Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sun, 13 Apr 2014 00:31:22 +0000 (17:31 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull yet more networking updates from David Miller:

 1) Various fixes to the new Redpine Signals wireless driver, from
    Fariya Fatima.

 2) L2TP PPP connect code takes PMTU from the wrong socket, fix from
    Dmitry Petukhov.

 3) UFO and TSO packets differ in whether they include the protocol
    header in gso_size, account for that in skb_gso_transport_seglen().
   From Florian Westphal.

 4) If VLAN untagging fails, we double free the SKB in the bridging
    output path.  From Toshiaki Makita.

 5) Several call sites of sk->sk_data_ready() were referencing an SKB
    just added to the socket receive queue in order to calculate the
    second argument via skb->len.  This is dangerous because the moment
    the skb is added to the receive queue it can be consumed in another
    context and freed up.

    It turns out also that none of the sk->sk_data_ready()
    implementations even care about this second argument.

    So just kill it off and thus fix all these use-after-free bugs as a
    side effect.

 6) Fix inverted test in tcp_v6_send_response(), from Lorenzo Colitti.

 7) pktgen needs to do locking properly for LLTX devices, from Daniel
    Borkmann.

 8) xen-netfront driver initializes TX array entries in RX loop :-) From
    Vincenzo Maffione.

 9) After refactoring, some tunnel drivers allow a tunnel to be
    configured on top itself.  Fix from Nicolas Dichtel.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (46 commits)
  vti: don't allow to add the same tunnel twice
  gre: don't allow to add the same tunnel twice
  drivers: net: xen-netfront: fix array initialization bug
  pktgen: be friendly to LLTX devices
  r8152: check RTL8152_UNPLUG
  net: sun4i-emac: add promiscuous support
  net/apne: replace IS_ERR and PTR_ERR with PTR_ERR_OR_ZERO
  net: ipv6: Fix oif in TCP SYN+ACK route lookup.
  drivers: net: cpsw: enable interrupts after napi enable and clearing previous interrupts
  drivers: net: cpsw: discard all packets received when interface is down
  net: Fix use after free by removing length arg from sk_data_ready callbacks.
  Drivers: net: hyperv: Address UDP checksum issues
  Drivers: net: hyperv: Negotiate suitable ndis version for offload support
  Drivers: net: hyperv: Allocate memory for all possible per-packet information
  bridge: Fix double free and memory leak around br_allowed_ingress
  bonding: Remove debug_fs files when module init fails
  i40evf: program RSS LUT correctly
  i40evf: remove open-coded skb_cow_head
  ixgb: remove open-coded skb_cow_head
  igbvf: remove open-coded skb_cow_head
  ...

Linus Torvalds [Sun, 13 Apr 2014 00:26:45 +0000 (17:26 -0700)]
Merge tag 'blackfin-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/realmz6/blackfin-linux

Pull blackfin updates from Steven Miao:
 "Code cleanup, some previously ignored patches, and bug fixes"

* tag 'blackfin-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/realmz6/blackfin-linux:
  blackfin: cleanup board files
  bf609: clock: drop unused clock bit set/clear functions
  Blackfin: bf537: rename "CONFIG_ADT75"
  Blackfin: bf537: rename "CONFIG_AD7314"
  Blackfin: bf537: rename ad2s120x ->ad2s1200
  blackfin: bf537: fix typo "CONFIG_SND_SOC_ADV80X_MODULE"
  blackfin: dma: current count mmr is read only
  bfin_crc: Move architecture independent crc header file out of the blackfin folder.
  bf54x: drop unused HOST status,control,timeout registers bit define macros
  blackfin: portmux: cleanup head file
  Blackfin: remove "config IP_CHECKSUM_L1"
  blackfin: Remove GENERIC_GPIO config option again
  blackfin:Use generic /proc/interrupts implementation
  blackfin: bf60x: fix typo "CONFIG_PM_BFIN_WAKE_PA15_POL"