Mark Brown [Fri, 11 Oct 2013 18:25:24 +0000 (19:25 +0100)]
Merge branch 'for-lsk' of git://git.linaro.org/arm/big.LITTLE/mp into lsk-v3.10-big.LITTLE
Jon Medhurst [Fri, 11 Oct 2013 16:12:02 +0000 (17:12 +0100)]
Merge tag 'big-LITTLE-MP-13.10' into for-lsk
Chris Redpath [Fri, 11 Oct 2013 10:45:04 +0000 (11:45 +0100)]
HMP: Implement task packing for small tasks in HMP systems
If we wake up a task on a little CPU, fill CPUs rather than
spread. Adds 2 new files to /sys/kernel/hmp to control packing
behaviour.
packing_enable: task packing enabled (1) or disabled (0)
packing_limit: Runqueues will be filled up to this load ratio.
This functionality is disabled by default on TC2 as it lacks per-cpu
power gating so packing small tasks there doesn't make sense.
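As a rough illustration of the decision this describes, the following self-contained C model fills a partly loaded little CPU instead of spreading, as long as its load ratio stays under the limit. The struct, the helper name and the default value of packing_limit are illustrative, not the kernel's actual symbols or defaults.

    struct cpu_load { int cpu; unsigned int load_ratio; /* 0..1023 */ };

    static int packing_enable = 1;           /* /sys/kernel/hmp/packing_enable */
    static unsigned int packing_limit = 650; /* /sys/kernel/hmp/packing_limit, illustrative */

    static int pick_little_cpu(const struct cpu_load *cpus, int n)
    {
            int best = -1;

            if (!packing_enable)
                    return -1;      /* caller falls back to the spread policy */
            for (int i = 0; i < n; i++) {
                    /* fill a partly loaded CPU rather than spreading, as long
                       as its load ratio stays below packing_limit */
                    if (cpus[i].load_ratio < packing_limit &&
                        (best < 0 || cpus[i].load_ratio > cpus[best].load_ratio))
                            best = i;
            }
            return best < 0 ? -1 : cpus[best].cpu;
    }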
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Fri, 11 Oct 2013 10:45:03 +0000 (11:45 +0100)]
hmp: Remove potential for task_struct access race
Accessing the task_struct can be racy in certain conditions, so
we need to only acquire the data when needed.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Fri, 11 Oct 2013 10:45:02 +0000 (11:45 +0100)]
sched: HMP: fix potential logical errors
The previous API for hmp_up_migration reset the destination
CPU every time, regardless of whether a migration was desired. The code
using it assumed that the value would not be changed unless
a migration was required. In one rare circumstance, this could
have led to a task migrating to a little CPU at the wrong time.
Fixing that led to a slight logical tweak to make the surrounding
APIs operate a bit more obviously.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Robin Randhawa <robin.randhawa@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Fri, 11 Oct 2013 10:45:01 +0000 (11:45 +0100)]
smp: smp_cross_call function pointer tracing
generic tracing for smp_cross_call function calls
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Fri, 11 Oct 2013 10:45:00 +0000 (11:45 +0100)]
arm: ipi raise/start/end tracing
Add tracepoints for IPI raise events, and start and end of the
ipi handler.
Used to inspect the source of CPU wake-ups which are not already
traced - all other reasons for a CPU to wake-up are already
covered.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Fri, 11 Oct 2013 10:44:59 +0000 (11:44 +0100)]
sched: HMP: Additional trace points for debugging HMP behaviour
1. Replace magic numbers in code for migration trace.
Trace points still emit a number as force=<n> field:
force=0 : wakeup migration
force=1 : forced migration
force=2 : offload migration
force=3 : idle pull migration
2. Add trace to expose offload decision-making.
Also adds tracing of rq->nr_running so that you can
look back to see what state the RQ was in at the time.
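For reference, the force field values listed above map onto a small enum like the following; the names are illustrative, the trace point itself only emits the number.

    enum hmp_migrate_reason {
            HMP_MIGRATE_WAKEUP    = 0,  /* force=0 : wakeup migration    */
            HMP_MIGRATE_FORCE     = 1,  /* force=1 : forced migration    */
            HMP_MIGRATE_OFFLOAD   = 2,  /* force=2 : offload migration   */
            HMP_MIGRATE_IDLE_PULL = 3,  /* force=3 : idle pull migration */
    };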
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Fri, 11 Oct 2013 10:44:58 +0000 (11:44 +0100)]
sched: HMP: Change default HMP thresholds
When the up-threshold is at 512 on TC2, behaviour looks OK since
the graphics-related tasks are very heavy due to the lack of a GPU.
Increasing the up-threshold does not reduce power consumption.
When a GPU is present, graphics tasks are much less CPU-heavy and
so additional power may be saved by having a higher threshold.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Mark Brown [Tue, 10 Sep 2013 09:51:00 +0000 (10:51 +0100)]
Merge branch 'for-lsk' of git://git.linaro.org/arm/big.LITTLE/mp into lsk-v3.10-big.LITTLE
Jon Medhurst [Thu, 5 Sep 2013 17:17:48 +0000 (18:17 +0100)]
Merge tag 'big-LITTLE-MP-13.08' into for-lsk
This merge is intended to tidy up the history of the big.LITTLE MP
patchset to enable ongoing maintenance of a standalone MP branch.
The only code change introduced by this merge is to add commit
0d5ddd14 (HMP: select 'best' task for migration rather than 'current')
This change is already in the Linaro Stable Kernel as commit
c5021c1eb9c73f38209180c65bd074ac70c97587
Chris Redpath [Tue, 23 Jul 2013 13:56:45 +0000 (14:56 +0100)]
HMP: Update migration timer when we fork-migrate
Prevents fork-migration adversely interacting with normal
migration (i.e. runqueues containing forked tasks being
selected as migration targets when there is a better
choice available)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Mon, 22 Jul 2013 14:56:28 +0000 (15:56 +0100)]
HMP: Access runqueue task clocks directly.
Avoids accesses through cfs_rq going bad when the cpu_rq doesn't
have a cfs member.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Thu, 8 Aug 2013 15:41:26 +0000 (16:41 +0100)]
HMP: Implement idle pull for HMP
When an A15 goes idle, we should up-migrate anything which is
above the threshold and running on an A7.
Reuses the HMP force-migration spinlock, but adds its own new
cpu stopper client.
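A minimal sketch of the pull decision, assuming each candidate task carries an unweighted load ratio and that the up-migration threshold is the tunable mentioned elsewhere in this series; all names and the default value are illustrative.

    #include <stddef.h>

    struct task_model { unsigned int load_ratio; int on_little; };

    static unsigned int hmp_up_threshold = 512;   /* illustrative default */

    /* When a big CPU goes idle, pick the heaviest little-CPU task that is
       above the up-migration threshold; NULL means nothing is worth pulling. */
    static struct task_model *idle_pull_candidate(struct task_model *tasks, int n)
    {
            struct task_model *heaviest = NULL;

            for (int i = 0; i < n; i++) {
                    if (!tasks[i].on_little)
                            continue;
                    if (tasks[i].load_ratio <= hmp_up_threshold)
                            continue;
                    if (!heaviest || tasks[i].load_ratio > heaviest->load_ratio)
                            heaviest = &tasks[i];
            }
            return heaviest;
    }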
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Thu, 8 Aug 2013 15:10:39 +0000 (16:10 +0100)]
sched: HMP change nr_running offload metric
rq->nr_running was better than cfs.nr_running, since it includes
all tasks actually on the CPU. However, it includes RT tasks which
we would rather ignore at this point.
Switching to cfs.h_nr_running includes all the CFS tasks but no
RT tasks.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Mon, 15 Jul 2013 15:06:44 +0000 (16:06 +0100)]
HMP: Explicitly implement all-load-is-max-load policy for HMP targets
Experimentally, one of the best policies for HMP migration CPU
selection is to completely ignore part-loaded CPUs and only look
for idle ones. If there are no idle ones, we will choose the one
which was least-recently-disturbed.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Thu, 8 Aug 2013 15:32:31 +0000 (16:32 +0100)]
HMP: Modify the runqueue stats to add a new child stat
The original intent here was to track unweighted runqueue load
with less resolution so we could use the least-recently-disturbed
runqueue to choose between 'closely related' load levels.
However, after experimenting with the resolution it turns out
that the following algorithm is highly beneficial for mobile
workloads.
In hmp_domain_min_load:
* If any CPU's load is zero, the overall load is zero
* If no CPUs are idle, the domain is 'fully loaded'
Additionally, the time since the last migration is used to
discriminate between idle CPUs.
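A self-contained model of that selection rule; the real hmp_domain_min_load() works on runqueues and cpumasks, here plain arrays stand in and the tie-break simply picks the oldest last-migration timestamp.

    /* Returns the index of the CPU to target, or -1 when no CPU is idle and
       the domain should be treated as 'fully loaded'. An idle CPU (load == 0)
       makes the domain's minimum load zero; ties between idle CPUs go to the
       least recently disturbed one. */
    static int domain_min_load_cpu(const unsigned int *load,
                                   const unsigned long *last_migration,
                                   int ncpus)
    {
            int best = -1;

            for (int i = 0; i < ncpus; i++) {
                    if (load[i] != 0)
                            continue;               /* only idle CPUs count */
                    if (best < 0 || last_migration[i] < last_migration[best])
                            best = i;               /* least recently disturbed */
            }
            return best;
    }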
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Thu, 8 Aug 2013 15:31:07 +0000 (16:31 +0100)]
sched: track per-rq 'last migration time'
Track when migrations were performed to runqueues.
Use this to decide between runqueues as migration targets when run
queues in an hmp domain have equal load.
Intention is to spread migration load amongst CPUs more fairly.
When all CPUs in an hmp domain are fully loaded, the existing code
always selects the last CPU as a migration target - this is unfair
and little better than doing no selection.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Morten Rasmussen [Tue, 6 Aug 2013 15:14:19 +0000 (16:14 +0100)]
sched: HMP fix traversing the rb-tree from the curr pointer
The hmp_get_{lightest,heaviest}_task() need to use
__pick_first_entity() to get a pointer to a sched_entity on the rq.
The current is not kept on the rq while running, so its rb-tree node
pointers are no longer valid.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Thu, 8 Aug 2013 15:27:34 +0000 (16:27 +0100)]
HMP: select 'best' task for migration rather than 'current'
When we are looking for a task to migrate up, select the heaviest
one in the first 5 runnable on the runqueue.
Likewise, when looking for a task to offload, select the lightest
one in the first 5 runnable on the runqueue.
Ensure task selected is runnable in the target domain.
This change is necessary in order to implement idle pull in a
sensible manner, but here is used in up-migration and offload to
select the correct target task.
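A simplified model of the selection: walk at most the first 5 runnable entities and keep the heaviest one (for offload, the comparison flips to lightest) that may run in the target domain. The scan limit matches the description; the types and helper are illustrative.

    #include <stddef.h>

    #define HMP_MAX_TASKS_SCANNED 5

    struct entity_model { unsigned int load_ratio; int allowed_in_target; };

    static struct entity_model *pick_heaviest(struct entity_model *rq, int nr_runnable)
    {
            struct entity_model *best = NULL;
            int scan = nr_runnable < HMP_MAX_TASKS_SCANNED ?
                       nr_runnable : HMP_MAX_TASKS_SCANNED;

            for (int i = 0; i < scan; i++) {
                    if (!rq[i].allowed_in_target)
                            continue;       /* must be runnable in the target domain */
                    if (!best || rq[i].load_ratio > best->load_ratio)
                            best = &rq[i];
            }
            return best;
    }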
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Jon Medhurst (Tixy) [Fri, 2 Aug 2013 17:45:33 +0000 (18:45 +0100)]
HMP: Check the system has little cpus before forcing rt tasks onto them
It is sometimes desirable to run a kernel with HMP scheduling enabled
on a system which is not big.LITTLE, e.g. when building a multi-platform
kernel, or when testing a big.LITTLE system with one cluster disabled.
We should therefore allow for the situation where there is no little domain.
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
Mark Brown [Tue, 20 Aug 2013 09:46:40 +0000 (10:46 +0100)]
Merge branch 'master' of git://git.linaro.org/arm/big.LITTLE/mp into lsk-v3.10-big.LITTLE
Chris Redpath [Mon, 19 Aug 2013 14:06:23 +0000 (15:06 +0100)]
HMP: Update migration timer when we fork-migrate
Prevents fork-migration adversely interacting with normal
migration (i.e. runqueues containing forked tasks being
selected as migration targets when there is a better
choice available)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Mon, 19 Aug 2013 14:06:22 +0000 (15:06 +0100)]
HMP: Access runqueue task clocks directly.
Avoids accesses through cfs_rq going bad when the cpu_rq doesn't
have a cfs member.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Mon, 19 Aug 2013 14:06:21 +0000 (15:06 +0100)]
HMP: Implement idle pull for HMP
When an A15 goes idle, we should up-migrate anything which is
above the threshold and running on an A7.
Reuses the HMP force-migration spinlock, but adds its own new
cpu stopper client.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Mon, 19 Aug 2013 14:06:20 +0000 (15:06 +0100)]
sched: HMP change nr_running offload metric
rq->nr_running was better than cfs.nr_running, since it includes
all tasks actually on the CPU. However, it includes RT tasks which
we would rather ignore at this point.
Switching to cfs.h_nr_running includes all the CFS tasks but no
RT tasks.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Mon, 19 Aug 2013 14:06:19 +0000 (15:06 +0100)]
HMP: Explicitly implement all-load-is-max-load policy for HMP targets
Experimentally, one of the best policies for HMP migration CPU
selection is to completely ignore part-loaded CPUs and only look
for idle ones. If there are no idle ones, we will choose the one
which was least-recently-disturbed.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Mon, 19 Aug 2013 14:06:18 +0000 (15:06 +0100)]
HMP: Modify the runqueue stats to add a new child stat
The original intent here was to track unweighted runqueue load
with less resolution so we could use the least-recently-disturbed
runqueue to choose between 'closely related' load levels.
However, after experimenting with the resolution it turns out
that the following algorithm is highly beneficial for mobile
workloads.
In hmp_domain_min_load:
* If any CPU's load is zero, the overall load is zero
* If no CPUs are idle, the domain is 'fully loaded'
Additionally, the time since the last migration is used to
discriminate between idle CPUs.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Mon, 19 Aug 2013 14:06:17 +0000 (15:06 +0100)]
sched: track per-rq 'last migration time'
Track when migrations were performed to runqueues.
Use this to decide between runqueues as migration targets when run
queues in an hmp domain have equal load.
Intention is to spread migration load amongst CPUs more fairly.
When all CPUs in an hmp domain are fully loaded, the existing code
always selects the last CPU as a migration target - this is unfair
and little better than doing no selection.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Morten Rasmussen [Mon, 19 Aug 2013 14:06:16 +0000 (15:06 +0100)]
sched: HMP fix traversing the rb-tree from the curr pointer
The hmp_get_{lightest,heaviest}_task() need to use
__pick_first_entity() to get a pointer to a sched_entity on the rq.
The current is not kept on the rq while running, so its rb-tree node
pointers are no longer valid.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Chris Redpath [Mon, 19 Aug 2013 14:06:15 +0000 (15:06 +0100)]
HMP: select 'best' task for migration rather than 'current'
When we are looking for a task to migrate up, select the heaviest
one in the first 5 runnable on the runqueue.
Likewise, when looking for a task to offload, select the lightest
one in the first 5 runnable on the runqueue.
Ensure task selected is runnable in the target domain.
This change is necessary in order to implement idle pull in a
sensible manner, but here is used in up-migration and offload to
select the correct target task.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Jon Medhurst (Tixy) [Fri, 2 Aug 2013 17:45:33 +0000 (18:45 +0100)]
HMP: Check the system has little cpus before forcing rt tasks onto them
It is sometimes desirable to run a kernel with HMP scheduling enabled
on a system which is not big.LITTLE, e.g. when building a multi-platform
kernel, or when testing a big.LITTLE system with one cluster disabled.
We should therefore allow for the situation where there is no little domain.
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
Jon Medhurst [Thu, 18 Jul 2013 10:49:27 +0000 (11:49 +0100)]
Merge branches 'master-arm-multi_pmu_v2', 'master-config-fragments', 'master-hw-bkpt-fix', 'master-misc-patches' and 'master-task-placement-v2-updates' into big-LITTLE-MP-master-v19
Updates:
-------
- Rebased over 3.10 final
- Differences from big-LITTLE-MP-master-v18
- New Patches:
- master-config-fragments: 1 new patch
- "config: Disable priority filtering for HMP Scheduler"
- master-misc-patches: 1 new patch
- "mm: make vmstat_update periodic run conditional"
- New Branches:
- master-task-placement-v2-updates: 7 patches
New patches from ARM added in a new topic branch stacked on top
of master-task-placement-v2-sysfs...
- Revert "sched: Enable HMP priority filter by default"
- "HMP: Use unweighted load for hmp migration decisions"
- "HMP: Select least-loaded CPU when performing HMP Migrations"
- "HMP: Avoid multiple calls to hmp_domain_min_load in fast path"
- "HMP: Force new non-kernel tasks onto big CPUs until load stabilises"
- "sched: Restrict nohz balance kicks to stay in the HMP domain"
- "HMP: experimental: Force all rt tasks to start on little domain."
Commands used for merge:
-----------------------
$ git checkout -b big-LITTLE-MP-master-v19 v3.10
$ git merge master-arm-multi_pmu_v2 master-config-fragments \
master-hw-bkpt-fix master-misc-patches master-task-placement-v2 \
master-task-placement-v2-sysfs master-task-placement-v2-updates
Dietmar Eggemann [Fri, 21 Jun 2013 16:50:08 +0000 (17:50 +0100)]
HMP: experimental: Force all rt tasks to start on little domain.
This patch restricts the allowed cpu mask for rt tasks initially started
with a full cpu mask to the little domain.
An rt task is specified as real time in __setscheduler() which is finally
called for all rt tasks (kernel and user land). In this function we
restrict the allowed cpu mask to the little domain.
This also prevents an rt task from later being pushed to the big domain,
because the function find_lowest_rq() only considers the allowed cpu
mask of a task when finding the new cpu for it to run on; the
restriction is sketched after the kludge notes below.
Current kludges of the patch:
* Since we do not have an API to get the cpu mask of the A7 cluster,
hmp_slow_cpu_mask is made global in arm/kernel/topology.c for now.
* The watchdog_enable() function calls sched_setscheduler() before
kthread_bind() for the cpu specific watchdog kernel threads. The order of
these two calls has to be changed to make this patch work.
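The core restriction can be modelled in a few lines: when a task is made real-time while still carrying the full cpu mask, confine it to the little cluster so a later push can only land on little CPUs. The bitmask type and the example masks (a TC2-like 2 big + 3 little layout) are stand-ins for the kernel's cpumask handling, not its API.

    #include <stdint.h>

    typedef uint32_t cpumask_model_t;                          /* one bit per cpu */

    static const cpumask_model_t cpu_all_mask_model      = 0x1f; /* cpus 0-4      */
    static const cpumask_model_t hmp_slow_cpu_mask_model = 0x1c; /* A7s: cpus 2-4 */

    /* Applied where __setscheduler() marks the task as real-time. */
    static void setscheduler_rt_model(int is_rt_policy, cpumask_model_t *cpus_allowed)
    {
            if (is_rt_policy && *cpus_allowed == cpu_all_mask_model)
                    *cpus_allowed = hmp_slow_cpu_mask_model;
    }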
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Chris Redpath [Mon, 17 Jun 2013 15:20:37 +0000 (16:20 +0100)]
sched: Restrict nohz balance kicks to stay in the HMP domain
There is little point in doing a nohz balance kick on a CPU from a
different HMP domain, since the unset SD_LOAD_BALANCE flag on the CPU
domain level prevents tasks from being balanced across clusters
except through the per-task load driven hmp_migrate/hmp_offload paths.
Further, the nohz balance kick is actively harmful to power usage if
all the tasks fit into the little domain since it causes the big
domain to wake up and do a lot of calculation to determine that
there is nothing to do.
A more generic solution is to walk the sched domain tree and determine
the intersection of potential idle balance cpus with visibility of
tasks on the current CPU, however HMP domains are more easily
accessible.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Chris Redpath [Mon, 17 Jun 2013 15:08:40 +0000 (16:08 +0100)]
HMP: Force new non-kernel tasks onto big CPUs until load stabilises
Initialise the load stats for new tasks so that they do not
see the instability in early task life which makes it so hard to
decide which CPU is appropriate.
Also, change the fork balance algorithm so that the least loaded of
the CPUs in the big cluster is chosen regardless of the bigness of
the parent task.
This is intended to help performance for applications which use
many short-lived tasks. Although best practice is usually to use
a thread pool, apps which do not do this should not be subject to
the randomness of the early stats.
We should ignore real-time threads for forking on big CPUs, but
it is not possible to figure out if a new thread is real-time or
not at the fork stage. Instead, we prevent kernel threads from
getting the initial boost - when they later become real-time they
will only be on big if their compute requirements demand it.
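The two parts of the change can be sketched as follows: seed the new task's load statistics at the maximum so it does not look idle while its history builds, and pick the least-loaded big CPU regardless of the parent, except for kernel threads. All names and the 0..1023 scale are illustrative.

    struct new_task_model {
            int is_kernel_thread;
            unsigned int load_ratio;        /* 0..1023 */
    };

    /* Returns an index into the big cluster's CPU array, or -1 to fall back
       to the normal fork-balance path (used for kernel threads). */
    static int fork_cpu_model(struct new_task_model *p,
                              const unsigned int *big_cpu_load, int nr_big)
    {
            int least = 0;

            if (p->is_kernel_thread)
                    return -1;              /* no initial boost for kernel threads */

            p->load_ratio = 1023;           /* look 'heavy' until the stats settle */
            for (int i = 1; i < nr_big; i++)
                    if (big_cpu_load[i] < big_cpu_load[least])
                            least = i;
            return least;                   /* least-loaded big CPU, parent ignored */
    }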
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Chris Redpath [Thu, 9 May 2013 15:21:29 +0000 (16:21 +0100)]
HMP: Avoid multiple calls to hmp_domain_min_load in fast path
When evaluating a migration we make two calls to hmp_domain_min_load.
This is unnecessary if we pass on the target CPU information from the
hmp_up_migration path.
In hmp_down_migration, we don't consider the load of the target CPUs.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Chris Redpath [Thu, 9 May 2013 15:21:15 +0000 (16:21 +0100)]
HMP: Select least-loaded CPU when performing HMP Migrations
The reference patch set always selects the first CPU in an HMP
domain as a migration target. In busy situations, this means that
the migrated thread cannot make immediate use of an idle CPU but
must share a busy one until the load balancer runs across the big
domain.
This patch uses the hmp_domain_min_load function introduced in
global balancing to figure out which of the CPUs is the least busy
and selects that as a migration target - in both directions.
This essentially implements a task-spread strategy and is intended
to maximise performance of migrated threads but is likely
to use more power than the packing strategy previously employed.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Chris Redpath [Mon, 17 Jun 2013 14:48:15 +0000 (15:48 +0100)]
HMP: Use unweighted load for hmp migration decisions
Normal task and runqueue loading is scaled according to priority
to end up with a weighted load, known as the contribution.
We want the CPU time to be allotted according to priority, but
we also want to make big/little decisions based upon raw load.
It is common, for example, for Android apps following the dev
guide to end up with all their long-running or async action
threads as low priority unless they override the AsyncThread
constructor. All these threads are such low priority that they
become invisible to the hmp_offload routine.
Using unweighted load here allows us to maximise CPU usage in busy
situations.
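The difference in formula form, following the load-tracking terms used elsewhere in this series (NICE_0_LOAD is the nice-0 weight); the helpers are a simplified model, not the kernel's exact code.

    #define NICE_0_LOAD 1024

    /* The weighted contribution scales with the task's priority weight; the
       unweighted ratio always uses the nice-0 weight, so a low-priority but
       CPU-hungry thread still looks 'big' to the migration code. */
    static unsigned long weighted_contrib(unsigned long runnable_avg_sum,
                                          unsigned long runnable_avg_period,
                                          unsigned long weight)
    {
            return runnable_avg_sum * weight / (runnable_avg_period + 1);
    }

    static unsigned long unweighted_ratio(unsigned long runnable_avg_sum,
                                          unsigned long runnable_avg_period)
    {
            return runnable_avg_sum * NICE_0_LOAD / (runnable_avg_period + 1);
    }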
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Chris Redpath [Mon, 17 Jun 2013 14:22:58 +0000 (15:22 +0100)]
Revert "sched: Enable HMP priority filter by default"
This reverts commit 68315334e32932739145ddb41a46cc86b8b056b3.
Having the priority filter enabled prevents proper operation
on Android systems where a wider range of priorities are used
by userspace to partition types of tasks. Those tasks should still
be able to benefit from the use of big CPUs when required.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Gilad Ben-Yossef [Wed, 26 Jun 2013 16:24:59 +0000 (17:24 +0100)]
mm: make vmstat_update periodic run conditional
vmstat_update runs every second from the work queue to update statistics
and drain per cpu pages back into the global page allocator.
This is useful in most circumstances but is wasteful if the CPU doesn't
actually perform any VM activity. This can happen in the situation that
the CPU is idle or running a CPU-bound long-term task (e.g. CPU
isolation), in which case the periodic vmstat_update timer needlessly
interrupts the CPU.
This patch tries to make vmstat_update schedule itself for the next
round only if there was any work for it to do in the previous run.
The assumption is that if for a whole second we didn't see any VM
activity it is reasonable to assume that the CPU is not using the
VM because it is idle or runs a long-term, single-CPU-bound task.
A new single unbound system work queue item is scheduled periodically
to monitor CPUs that have their vmstat_update work stopped and
re-schedule them if VM activity is detected.
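Condensed into a model, the scheme looks like this: the per-cpu worker requeues itself only if its previous pass folded any counters, and a single unbound monitor restarts CPUs whose VM activity resumes. The real code uses per-cpu delayed work; the names here are illustrative.

    #include <stdbool.h>

    struct cpu_vmstat_model {
            bool work_queued;               /* is the per-cpu update still queued? */
            unsigned long work_done;        /* counters folded in the last pass    */
    };

    /* per-cpu worker: requeue only if the previous pass found work to do */
    static void vmstat_update_model(struct cpu_vmstat_model *c)
    {
            c->work_queued = (c->work_done != 0);
    }

    /* single unbound monitor: restart CPUs whose VM activity resumed */
    static void vmstat_monitor_model(struct cpu_vmstat_model *cpus, int n)
    {
            for (int i = 0; i < n; i++)
                    if (!cpus[i].work_queued && cpus[i].work_done)
                            cpus[i].work_queued = true;
    }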
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Tejun Heo <tj@kernel.org>
CC: John Stultz <johnstul@us.ibm.com>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
CC: Mel Gorman <mel@csn.ul.ie>
CC: Mike Frysinger <vapier@gentoo.org>
CC: David Rientjes <rientjes@google.com>
CC: Hugh Dickins <hughd@google.com>
CC: Minchan Kim <minchan.kim@gmail.com>
CC: Konstantin Khlebnikov <khlebnikov@openvz.org>
CC: Christoph Lameter <cl@linux.com>
CC: Chris Metcalf <cmetcalf@tilera.com>
CC: Hakan Akkan <hakanakkan@gmail.com>
CC: Max Krasnyansky <maxk@qualcomm.com>
CC: Frederic Weisbecker <fweisbec@gmail.com>
CC: linux-kernel@vger.kernel.org
CC: linux-mm@kvack.org
Chris Redpath [Mon, 17 Jun 2013 14:25:51 +0000 (15:25 +0100)]
config: Disable priority filtering for HMP Scheduler
Android uses threads with very low priority by default to implement
AsyncTask APIs. This means that applications making use of these
APIs to produce multithreaded code are penalised by not allowing
use of big CPUs as necessary.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Chris Redpath [Thu, 16 May 2013 16:48:41 +0000 (17:48 +0100)]
sched: cfs.nr_running does not contain the intended metric
rq->nr_running is the actual number of runnable tasks we wish to use
to determine if a task is alone on a CPU.
Change-Id: Icaf3022e02924ecdc94e14d4146c6fadd9580e2b
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Morten Rasmussen [Thu, 29 Nov 2012 15:41:50 +0000 (15:41 +0000)]
sched: Basic global balancing support for HMP
This patch introduces an extra check at task up-migration to
prevent overloading the cpus in the faster hmp_domain while the
slower hmp_domain is not fully utilized. The patch also introduces
a periodic balance check that can down-migrate tasks if the faster
domain is oversubscribed and the slower is under-utilized.
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Chris Redpath [Tue, 20 Nov 2012 05:34:49 +0000 (11:04 +0530)]
ARM: Fix build breakage when big.LITTLE.conf is not used.
Change-Id: I8641f5e930c65b5672130bd4a18d9868bb3ca594
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Chris Redpath [Fri, 16 Nov 2012 10:03:00 +0000 (10:03 +0000)]
ARM: Experimental Frequency-Invariant Load Scaling Patch
Evaluation Patch to investigate using load as a representation of the
amount of POTENTIAL cpu compute capacity used rather than a representation
of the CURRENT cpu compute capacity.
If CPUFreq is enabled, scales load in accordance with frequency.
Powersave/performance CPUFreq governors are detected and scaling is
disabled while these governors are in use. This is because when a
single-frequency governor is in use, potential CPU capacity is static.
So long as the governors and CPUFreq subsystem correctly report the
frequencies available, the scaling should self tune.
Adds an additional file to sysfs to allow this feature to be disabled
for experimentation.
/sys/kernel/hmp/frequency_invariant_load_scale
write 0 to disable, 1 to enable.
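The scaling itself amounts to one fixed-point multiply, bypassed when scaling is disabled or a static-frequency governor is in use; the 10-bit shift and the helper below are illustrative of the idea rather than the exact kernel code.

    /* mirrors /sys/kernel/hmp/frequency_invariant_load_scale: 0 off, 1 on */
    static int freq_invariant_scale = 1;

    static unsigned long scale_load_by_freq(unsigned long load,
                                            unsigned long cur_freq_khz,
                                            unsigned long max_freq_khz,
                                            int static_freq_governor)
    {
            unsigned long factor;

            /* powersave/performance pin the frequency, so potential
               capacity is static and no scaling is applied */
            if (!freq_invariant_scale || static_freq_governor || !max_freq_khz)
                    return load;

            factor = (cur_freq_khz << 10) / max_freq_khz;   /* <= 1024 */
            return (load * factor) >> 10;
    }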
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Olivier Cozette [Wed, 17 Oct 2012 13:30:30 +0000 (14:30 +0100)]
ARM: Change load tracking scale using sysfs
These functions allow changing the load average period used
in the task load average computation through
/sys/kernel/hmp/load_avg_period_ms. This period is the time
in ms to go from 0 to 0.5 load average while running or the
time from 1 to 0.5 while sleeping.
The default is 32 and gives the same load_avg_ratio
computation as without this patch. These functions also allow
changing the up and down thresholds of HMP using
/sys/kernel/hmp/{up,down}_threshold. Both must be between 0 and
1024. The thresholds are divided by 1024 before being compared
to the load_avg_ratio.
If /sys/kernel/hmp/load_avg_period_ms is 128 and
/sys/kernel/hmp/up_threshold is 512, a task will be migrated
to a bigger cluster after running for 128ms, because after
load_avg_period_ms the load average is 0.5 and the real up_threshold
is 512 / 1024 = 0.5.
Signed-off-by: Olivier Cozette <olivier.cozette@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Chris Redpath [Thu, 16 May 2013 16:48:24 +0000 (17:48 +0100)]
sched: Ignore offline CPUs in HMP migration & load stats
Previously, an offline CPU would always appear to have a zero load
and this would distort the offload functionality used for balancing
big and little domains.
Maintain a mask of online CPUs in each domain and use this instead.
Change-Id: I639b564b2f40cb659af8ceb8bd37f84b8a1fe323
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Chris Redpath [Thu, 16 May 2013 16:48:01 +0000 (17:48 +0100)]
sched: Do not ignore grouped tasks during HMP forced migration.
If the entity is not a task, it is a cfs group rq. Iterate up to
find the task entity.
Change-Id: I7cab7aba0798f6f14e38ad32e566d90e5937ffbc
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Sudeep KarkadaNagesha [Mon, 24 Sep 2012 13:07:20 +0000 (14:07 +0100)]
sched: fix arch_get_fast_and_slow_cpus to get logical cpumask correctly
The patch "sched: Use device-tree to provide fast/slow CPU list for HMP"
depends on the ordering of CPU's in the device tree. It breaks to determine
the logical mask correctly if the logical mask of the CPUs differ from
physical ordering in the device tree.
This patch fix the logic by depending on the mpidr in the device tree
and mapping that mpidr to the logical cpu.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Morten Rasmussen [Fri, 12 Oct 2012 14:25:02 +0000 (15:25 +0100)]
sched: Only down migrate low priority tasks if allowed by affinity mask
Adds an extra check on the intersection of the task affinity mask and the
slower hmp_domain cpumask before down-migrating low priority tasks.
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Jon Medhurst [Fri, 12 Oct 2012 12:45:35 +0000 (13:45 +0100)]
ARM: sched: Avoid empty 'slow' HMP domain
On homogeneous (non-heterogeneous) systems all CPUs will be declared
'fast' and the slow cpu list will be empty. In this situation we need to
avoid adding an empty slow HMP domain otherwise the scheduler code will
blow up when it attempts to move a task to the slow domain.
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Morten Rasmussen [Wed, 10 Oct 2012 13:51:25 +0000 (14:51 +0100)]
sched: Enable HMP priority filter by default
This updates the ARM Kconfig to enable the HMP priority filter by default.
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Morten Rasmussen [Fri, 14 Sep 2012 13:38:17 +0000 (14:38 +0100)]
sched: SCHED_HMP multi-domain task migration control
We need a way to prevent tasks that are migrating up and down the
hmp_domains from migrating straight on through before the load has
adapted to the new compute capacity of the CPU on the new hmp_domain.
This patch adds a next up/down migration delay that prevents the task
from doing another migration in the same direction until the delay
has expired.
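The delay is simple hysteresis: remember when the task last migrated in each direction and refuse another migration in that direction until the delay expires. Field names and the constants below are illustrative.

    struct hmp_task_stats_model {
            unsigned long last_up_migration;    /* timestamp, same unit as 'now' */
            unsigned long last_down_migration;
    };

    #define HMP_NEXT_UP_DELAY   100             /* illustrative values */
    #define HMP_NEXT_DOWN_DELAY 100

    static int up_migration_allowed(const struct hmp_task_stats_model *s,
                                    unsigned long now)
    {
            return now - s->last_up_migration >= HMP_NEXT_UP_DELAY;
    }

    static int down_migration_allowed(const struct hmp_task_stats_model *s,
                                      unsigned long now)
    {
            return now - s->last_down_migration >= HMP_NEXT_DOWN_DELAY;
    }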
Signed-off-by: Morten Rasmussen <Morten.Rasmussen@arm.com>
Morten Rasmussen [Fri, 14 Sep 2012 13:38:16 +0000 (14:38 +0100)]
sched: Add HMP task migration ftrace event
Adds ftrace event for tracing task migrations using HMP
optimized scheduling.
Signed-off-by: Morten Rasmussen <Morten.Rasmussen@arm.com>
Morten Rasmussen [Fri, 14 Sep 2012 13:38:15 +0000 (14:38 +0100)]
sched: Add ftrace events for entity load-tracking
Adds ftrace events for key variables related to the entity
load-tracking to help debugging scheduler behaviour. Allows tracing
of load contribution and runqueue residency ratio for both entities
and runqueues as well as entity CPU usage ratio.
Signed-off-by: Morten Rasmussen <Morten.Rasmussen@arm.com>
Morten Rasmussen [Fri, 14 Sep 2012 13:38:14 +0000 (14:38 +0100)]
ARM: sched: Setup SCHED_HMP domains
SCHED_HMP requires the different cpu types to be represented by an
ordered list of hmp_domains. Each hmp_domain represents all cpus of
a particular type using a cpumask.
The list is platform specific and therefore must be generated by
platform code by implementing arch_get_hmp_domains().
Signed-off-by: Morten Rasmussen <Morten.Rasmussen@arm.com>
Morten Rasmussen [Fri, 14 Sep 2012 13:38:13 +0000 (14:38 +0100)]
ARM: sched: Use device-tree to provide fast/slow CPU list for HMP
We can't rely on Kconfig options to set the fast and slow CPU lists for
HMP scheduling if we want a single kernel binary to support multiple
devices with different CPU topology. E.g. TC2 (ARM's Test-Chip-2
big.LITTLE system), Fast Models, or even non big.LITTLE devices.
This patch adds the function arch_get_fast_and_slow_cpus() to generate
the lists at run-time by parsing the CPU nodes in device-tree; it
assumes slow cores are A7s and everything else is fast. The function
still supports the old Kconfig options as this is useful for testing the
HMP scheduler on devices without big.LITTLE.
This patch is a reuse of a patch by Jon Medhurst <tixy@linaro.org> with a
few bits left out.
Signed-off-by: Morten Rasmussen <Morten.Rasmussen@arm.com>
Morten Rasmussen [Fri, 14 Sep 2012 13:38:12 +0000 (14:38 +0100)]
ARM: Add HMP scheduling support for ARM architecture
Adds Kconfig entries to enable HMP scheduling on ARM platforms.
Currently, it disables CPU level sched_domain load-balancing in order
to simplify things. This needs fixing in a later revision. HMP
scheduling will do the load-balancing at this level instead.
Signed-off-by: Morten Rasmussen <Morten.Rasmussen@arm.com>
Morten Rasmussen [Fri, 14 Sep 2012 13:38:11 +0000 (14:38 +0100)]
sched: Introduce priority-based task migration filter
Introduces a priority threshold which prevents low priority tasks
from migrating to faster hmp_domains (cpus). This is useful for
user-space software which assigns lower task priority to background
tasks.
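The filter reduces to a single comparison before an up-migration is considered; the threshold value below is purely illustrative.

    /* Kernel-style priorities: numerically higher means lower priority.
       Tasks at or beyond the threshold are never migrated to a faster domain. */
    static int hmp_up_prio = 110;   /* illustrative threshold */

    static int prio_allows_up_migration(int task_prio)
    {
            return task_prio < hmp_up_prio;
    }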
Signed-off-by: Morten Rasmussen <Morten.Rasmussen@arm.com>
Morten Rasmussen [Fri, 14 Sep 2012 13:38:10 +0000 (14:38 +0100)]
sched: Forced task migration on heterogeneous systems
This patch introduces forced task migration for moving suitable
currently running tasks between hmp_domains. Task behaviour is likely
to change over time. Tasks running in a less capable hmp_domain may
change to become more demanding and should therefore be migrated up.
They are unlikely to go through the select_task_rq_fair() path anytime
soon and therefore need special attention.
This patch introduces a periodic check (SCHED_TICK) of the currently
running task on all runqueues and sets up a forced migration using
stop_machine_no_wait() if the task needs to be migrated.
Ideally, this should not be implemented by polling all runqueues.
Signed-off-by: Morten Rasmussen <Morten.Rasmussen@arm.com>
Morten Rasmussen [Fri, 14 Sep 2012 13:38:09 +0000 (14:38 +0100)]
sched: Task placement for heterogeneous systems based on task load-tracking
This patch introduces the basic SCHED_HMP infrastructure. Each class of
cpus is represented by a hmp_domain and tasks will only be moved between
these domains when their load profiles suggest it is beneficial.
SCHED_HMP relies heavily on the task load-tracking introduced in Paul
Turner's fair group scheduling patch set:
<https://lkml.org/lkml/2012/8/23/267>
SCHED_HMP requires that the platform implements arch_get_hmp_domains()
which should set up the platform specific list of hmp_domains. It is
also assumed that the platform disables SD_LOAD_BALANCE for the
appropriate sched_domains.
Task placement takes place every time a task is to be inserted into
a runqueue based on its load history. The task placement decision is
based on load thresholds.
There are no restrictions on the number of hmp_domains; however,
configurations with more than two have not been tested and the up/down
migration policy is rather simple.
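The structure this implies is an ordered, fastest-first list of domains, each holding a cpumask of one cpu type, built by arch_get_hmp_domains(). The sketch below uses a plain bitmask in place of struct cpumask and models a TC2-like layout with 2 big and 3 little cpus; all of it is illustrative.

    #include <stdint.h>

    struct cpumask_model { uint32_t bits; };        /* stand-in for struct cpumask */

    struct hmp_domain_model {
            struct cpumask_model cpus;              /* all cpus of this type */
            struct hmp_domain_model *next;          /* next, slower, domain  */
    };

    /* arch_get_hmp_domains() would build this list, fastest first. */
    static struct hmp_domain_model hmp_little = { { 0x1c }, 0 };           /* cpus 2-4 */
    static struct hmp_domain_model hmp_big    = { { 0x03 }, &hmp_little }; /* cpus 0-1 */
    static struct hmp_domain_model *hmp_domains = &hmp_big;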
Signed-off-by: Morten Rasmussen <Morten.Rasmussen@arm.com>
Morten Rasmussen [Fri, 14 Sep 2012 13:38:08 +0000 (14:38 +0100)]
sched: entity load-tracking load_avg_ratio
This patch adds load_avg_ratio to each task. The load_avg_ratio is a
variant of load_avg_contrib which is not scaled by the task priority. It
is calculated like this:
runnable_avg_sum * NICE_0_LOAD / (runnable_avg_period + 1).
Signed-off-by: Morten Rasmussen <Morten.Rasmussen@arm.com>
Paul Turner [Fri, 21 Sep 2012 20:27:51 +0000 (13:27 -0700)]
sched: implement usage tracking
With the framework for runnable tracking now fully in place, per-entity usage
tracking is a simple and low-overhead addition.
Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Thomas Gleixner [Fri, 25 May 2012 14:59:47 +0000 (16:59 +0200)]
genirq: Add default affinity mask command line option
If we isolate CPUs, then we don't want random device interrupts on
them. Even w/o the user space irq balancer enabled we can end up with
irqs on non-boot cpus.
Allow restricting the default irq affinity mask.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Lokesh Vutla [Wed, 13 Mar 2013 06:52:33 +0000 (06:52 +0000)]
ARM: hw_breakpoint: Enable debug powerdown only if system supports 'has_ossr'
Commit 9a6eb31 (ARM: hw_breakpoint: Debug powerdown support for self-hosted
debug) introduces debug powerdown support for self-hosted debug.
While merging the patch the 'has_ossr' check was removed, which
was needed for hardware which doesn't support self-hosted debug.
Pandaboard (A9) is one such hardware and Dietmar's original
patch did mention this issue.
Without that check on Panda with CPUIDLE enabled, a flood of
the messages below is thrown.
[ 3.597930] hw-breakpoint: CPU 0 failed to disable vector catch
[ 3.597991] hw-breakpoint: CPU 1 failed to disable vector catch
So restore that check back to avoid the mentioned issue.
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reported-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
Liviu Dudau [Fri, 16 Nov 2012 18:32:44 +0000 (18:32 +0000)]
linaro/configs: big-LITTLE-MP: Enable the new tunable sysfs interface by default.
Enable the new tunable sysfs interface for HMP scaling invariants.
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Morten Rasmussen [Wed, 10 Oct 2012 13:51:25 +0000 (14:51 +0100)]
linaro/configs: Enable HMP priority filter by default
This updates linaro config fragments to enable the HMP priority filter by
default.
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Viresh Kumar [Wed, 12 Sep 2012 03:34:17 +0000 (09:04 +0530)]
config-frag/big-LITTLE: Use device-tree to provide fast/slow CPU list for HMP
Currently there are two ways of passing the list of fast/slow CPUs to the
kernel: one via configs and the other via DT. The code tries to get them via
configs first and then via DT.
To make it configurable via DT by default, make the config strings empty.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reported-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Viresh Kumar [Wed, 11 Jul 2012 08:55:22 +0000 (09:55 +0100)]
linaro/configs: Update big LITTLE MP fragment for task placement work
CONFIG_HMP_FAST_CPU_MASK and CONFIG_HMP_SLOW_CPU_MASK must be set correctly by
the user's platform. For now they are marked 0-1 and 2-3.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Viresh Kumar [Tue, 10 Jul 2012 13:47:10 +0000 (14:47 +0100)]
configs: Add config fragments for big LITTLE MP
This patch adds config fragments used to enable most of the features used by
big LITTLE MP.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Sudeep KarkadaNagesha [Tue, 25 Sep 2012 17:40:12 +0000 (18:40 +0100)]
ARM: perf: save/restore pmu registers in pm notifier
This adds core support for saving and restoring CPU PMU registers
for suspend/resume support i.e. deeper C-states in cpuidle terms.
This patch adds support only to ARMv7 PMU registers save/restore.
It needs to be extended to xscale and ARMv6 if needed.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Sudeep KarkadaNagesha [Tue, 25 Sep 2012 16:36:12 +0000 (17:36 +0100)]
ARM: perf: remove spaces in CPU PMU names
The userspace perf tool provides options to specify PMU names on the command
line for the event. An example of pmu event syntax would be
(<pmu_name>/<config>/<modifier>).
However, the parser in the perf tool breaks the tokens at spaces and fails to
identify a PMU name containing spaces correctly.
This patch removes spaces in the ARMv7 CPU PMU names.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Sudeep KarkadaNagesha [Tue, 25 Sep 2012 16:30:45 +0000 (17:30 +0100)]
ARM: perf: set cpu affinity for the irqs correctly
This patch sets the cpu affinity for the perf IRQs in the logical order
within the cluster. However, interrupts are assumed to be specified in the
same logical order within the cluster.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Sudeep KarkadaNagesha [Tue, 25 Sep 2012 16:26:51 +0000 (17:26 +0100)]
ARM: perf: set cpu affinity to support multiple PMUs
In a system with multiple heterogeneous CPU PMUs, each PMU can handle
events on a subset of CPUs, probably belonging to the same cluster.
This patch introduces a cpumask to track which CPUs each PMU supports.
It also updates armpmu_event_init to reject cpu-specific events being
initialised for unsupported CPUs. Since process-specific events can be
initialised for all the CPU PMUs, armpmu_start/stop/add are modified to
prevent events from being added on unsupported CPUs.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Sudeep KarkadaNagesha [Tue, 25 Sep 2012 16:26:50 +0000 (17:26 +0100)]
ARM: perf: register CPU PMUs with idr types
In order to support multiple, heterogeneous CPU PMUs and distinguish
them, they cannot be registered as PERF_TYPE_RAW type. Instead we can
get perf core to allocate a new idr type id for each PMU.
Userspace applications can refer to sysfs entries to find a PMU's type,
which can then be used in tracking events on individual PMUs.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Sudeep KarkadaNagesha [Thu, 20 Sep 2012 16:53:42 +0000 (17:53 +0100)]
ARM: perf: replace global CPU PMU pointer with per-cpu pointers
A single global CPU PMU pointer is not useful in a system with multiple,
heterogeneous CPU PMUs as we need to access the relevant PMU depending
on the current CPU.
This patch replaces the single global CPU PMU pointer with per-cpu
pointers and changes the OProfile accessors to refer to the PMU affine
to CPU0.
Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Lorenzo Pieralisi [Mon, 10 Sep 2012 15:06:30 +0000 (16:06 +0100)]
ARM: kernel: provide cluster to logical cpu mask mapping API
Some device drivers like PMU require to retrieve the logical cpu mask
that corresponds to a given cluster id. This patch provides a hook in
the topology code that, given an existing cluster id as input,
initializes the corresponding cpumask passed as a pointer, reusing all
existing topology information required by sched domains in the kernel.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Linus Torvalds [Sun, 30 Jun 2013 22:13:29 +0000 (15:13 -0700)]
Linux 3.10
Linus Torvalds [Sun, 30 Jun 2013 22:08:15 +0000 (15:08 -0700)]
Merge branch 'merge' of git://git./linux/kernel/git/benh/powerpc
Pull another powerpc fix from Benjamin Herrenschmidt:
"I mentioned that while we had fixed the kernel crashes, EEH error
recovery didn't always recover... It appears that I had a fix for
that already in powerpc-next (with a stable CC).
I cherry-picked it today and did a few tests and it seems that things
now work quite well. The patch is also pretty simple, so I see no
reason to wait before merging it."
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
powerpc/eeh: Fix fetching bus for single-dev-PE
Linus Torvalds [Sun, 30 Jun 2013 22:06:25 +0000 (15:06 -0700)]
Merge tag 'scsi-fixes' of git://git./linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
"This is a set of seven bug fixes. Several fcoe fixes for locking
problems, initiator issues and a VLAN API change, all of which could
eventually lead to data corruption, one fix for a qla2xxx locking
problem which could lead to multiple completions of the same request
(and subsequent data corruption) and a use after free in the ipr
driver. Plus one minor MAINTAINERS file update"
(only six bugfixes in this pull, since I had already pulled the fcoe API
fix directly from Robert Love)
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
[SCSI] ipr: Avoid target_destroy accessing memory after it was freed
[SCSI] qla2xxx: Fix for locking issue between driver ISR and mailbox routines
MAINTAINERS: Fix fcoe mailing list
libfc: extend ex_lock to protect all of fc_seq_send
libfc: Correct check for initiator role
libfcoe: Fix Conflicting FCFs issue in the fabric
Gavin Shan [Wed, 5 Jun 2013 07:34:02 +0000 (15:34 +0800)]
powerpc/eeh: Fix fetching bus for single-dev-PE
While running Linux as a guest on top of phyp, we possibly have a
PE that includes a single PCI device. However, we didn't return
its PCI bus correctly, and this leads to failure to recover from
EEH errors for a single-dev-PE. The patch fixes the issue.
Cc: <stable@vger.kernel.org> # v3.7+
Cc: Steve Best <sbest@us.ibm.com>
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Linus Torvalds [Sun, 30 Jun 2013 00:02:48 +0000 (17:02 -0700)]
Merge branch 'merge' of git://git./linux/kernel/git/benh/powerpc
Pull powerpc fixes from Ben Herrenschmidt:
"We discovered some breakage in our "EEH" (PCI Error Handling) code
while doing error injection, due to a couple of regressions. One of
them is due to a patch (37f02195bee9 "powerpc/pci: fix PCI-e devices
rescan issue on powerpc platform") that, in hindsight, I shouldn't
have merged considering that it caused more problems than it solved.
Please pull those two fixes. One for a simple EEH address cache
initialization issue. The other one is a patch from Guenter that I
had originally planned to put in 3.11 but which happens to also fix
that other regression (a kernel oops during EEH error handling and
possibly hotplug).
With those two, the couple of test machines I've hammered with error
injection are remaining up now. EEH appears to still fail to recover
on some devices, so there is another problem that Gavin is looking
into but at least it's no longer crashing the kernel."
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
powerpc/pci: Improve device hotplug initialization
powerpc/eeh: Add eeh_dev to the cache during boot
Olof Johansson [Sat, 29 Jun 2013 23:25:14 +0000 (16:25 -0700)]
ARM: dt: Only print warning, not WARN() on bad cpu map in device tree
Due to recent changes and expectations of proper cpu bindings, there are
now cases for many of the in-tree devicetrees where a WARN() will hit
on boot due to badly formatted /cpus nodes.
Downgrade this to a pr_warn() to be less alarmist, since it's not a
new problem.
Tested on Arndale, Cubox, Seaboard and Panda ES. Panda hits the WARN
without this, the others do not.
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Guenter Roeck [Mon, 10 Jun 2013 17:18:08 +0000 (10:18 -0700)]
powerpc/pci: Improve device hotplug initialization
Commit 37f02195b (powerpc/pci: fix PCI-e devices rescan issue on powerpc
platform) fixes a problem with interrupt and DMA initialization on hot
plugged devices. With this commit, interrupt and DMA initialization for
hot plugged devices is handled in the pci device enable function.
This approach has a couple of drawbacks. First, it creates two code paths
for device initialization, one for hot plugged devices and another for devices
known during the initial PCI scan. Second, the initialization code for hot
plugged devices is only called when the device is enabled, ie typically
in the probe function. Also, the platform specific setup code is called each
time pci_enable_device() is called, not only once during device discovery,
meaning it is actually called multiple times, once for devices discovered
during the initial scan and again each time a driver is re-loaded.
The visible result is that interrupt pins are only assigned to hot plugged
devices when the device driver is loaded. Effectively this changes the PCI
probe API, since pci_dev->irq and the device's dma configuration will now
only be valid after pci_enable() was called at least once. A more subtle
change is that platform specific PCI device setup is moved from device
discovery into the driver's probe function, more specifically into the
pci_enable_device() call.
To fix the inconsistencies, add new function pcibios_add_device.
Call pcibios_setup_device from pcibios_setup_bus_devices if device setup
is not complete, and from pcibios_add_device if bus setup is complete.
With this change, device setup code is moved back into device initialization,
and called exactly once for both static and hot plugged devices.
[ This also fixes a regression introduced by the above patch which
causes dev->irq to be overwritten under some circumstances after
MSIs have been enabled for the device which leads to crashes due
to the MSI core "hijacking" dev->irq to store the base MSI number
and not the LSI. --BenH
]
Cc: Yuanquan Chen <Yuanquan.Chen@freescale.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Hiroo Matsumoto <matsumoto.hiroo@jp.fujitsu.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Linus Torvalds [Sat, 29 Jun 2013 18:34:18 +0000 (11:34 -0700)]
Merge git://git./linux/kernel/git/herbert/crypto-2.6
Pull crypto fix from Herbert Xu:
"This fixes a crash in the crypto layer exposed by an SCTP test tool"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: algboss - Hold ref count on larval
Linus Torvalds [Sat, 29 Jun 2013 18:32:05 +0000 (11:32 -0700)]
Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
Pull drm/qxl fix from Dave Airlie:
"Bad me forgot an access check, possible security issue, but since this
is the first kernel with it, should be fine to just put it in now"
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
drm/qxl: add missing access check for execbuffer ioctl
Mathieu Desnoyers [Fri, 28 Jun 2013 13:49:46 +0000 (09:49 -0400)]
Fix: kernel/ptrace.c: ptrace_peek_siginfo() missing __put_user() validation
This __put_user() could be used by unprivileged processes to write into
kernel memory. The issue here is that even if copy_siginfo_to_user()
fails, the error code is not checked before __put_user() is executed.
Luckily, ptrace_peek_siginfo() has been added within the 3.10-rc cycle,
so it has not hit a stable release yet.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Andrey Vagin <avagin@openvz.org>
Cc: Roland McGrath <roland@redhat.com>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Pedro Alves <palves@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sat, 29 Jun 2013 17:31:15 +0000 (10:31 -0700)]
Merge branch 'for-linus' of git://git./linux/kernel/git/sage/ceph-client
Pull Ceph fix from Sage Weil:
"This is a recently spotted regression in the snapshot behavior...
It turns out several tests weren't being run in the nightlies so this
took a while to spot"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
rbd: send snapshot context with writes
Linus Torvalds [Sat, 29 Jun 2013 17:30:31 +0000 (10:30 -0700)]
Merge branch 'for-linus' of git://git./linux/kernel/git/viro/vfs
Pull ubifs fixes from Al Viro:
"A couple of ubifs readdir/lseek race fixes. Stable fodder, really
nasty..."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
UBIFS: fix a horrid bug
UBIFS: prepare to fix a horrid bug
Linus Torvalds [Sat, 29 Jun 2013 17:28:52 +0000 (10:28 -0700)]
Merge tag 'for-linus-20130628' of git://git./linux/kernel/git/dhowells/linux-mn10300
Pull two MN10300 fixes from David Howells:
"The first fixes a problem with passing arrays rather than pointers to
get_user() where __typeof__ then wants to declare and initialise an
array variable which gcc doesn't like.
The second fixes a problem whereby putting mem=xxx into the kernel
command line causes init=xxx to get an incorrect value."
* tag 'for-linus-20130628' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-mn10300:
mn10300: Use early_param() to parse "mem=" parameter
mn10300: Allow to pass array name to get_user()
Linus Torvalds [Sat, 29 Jun 2013 17:27:19 +0000 (10:27 -0700)]
Merge branch 'timers-urgent-for-linus' of git://git./linux/kernel/git/tip/tip
Pull timer fix from Thomas Gleixner:
"Correct an ordering issue in the tick broadcast code. I really wish
we'd get compensation for pain and suffering for each line of code we
write to work around dysfunctional timer hardware."
* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
tick: Fix tick_broadcast_pending_mask not cleared
Linus Torvalds [Sat, 29 Jun 2013 17:26:50 +0000 (10:26 -0700)]
Merge branch 'perf-urgent-for-linus' of git://git./linux/kernel/git/tip/tip
Pull perf fix from Ingo Molnar:
"One more fix for a recently discovered bug"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf: Disable monitoring on setuid processes for regular users
Artem Bityutskiy [Fri, 28 Jun 2013 11:15:15 +0000 (14:15 +0300)]
UBIFS: fix a horrid bug
Al Viro pointed me to the fact that '->readdir()' and '->llseek()' have no
mutual exclusion, which means the 'ubifs_dir_llseek()' can be run while we are
in the middle of 'ubifs_readdir()'.
This means that 'file->private_data' can be freed while 'ubifs_readdir()' uses
it, and this is a very bad bug: not only 'ubifs_readdir()' can return garbage,
but this may corrupt memory and lead to all kinds of problems like crashes and
security holes.
This patch fixes the problem by using the 'file->f_version' field, which
'->llseek()' always unconditionally sets to zero. We set it to 1 in
'ubifs_readdir()' and whenever we detect that it became 0, we know there was a
seek and it is time to clear the state saved in 'file->private_data'.
I tested this patch by writing a user-space program which runs readdir and
seek in parallel. I could easily crash the kernel without these patches, but
could not crash it with these patches.
Cc: stable@vger.kernel.org
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Tested-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Artem Bityutskiy [Fri, 28 Jun 2013 11:15:14 +0000 (14:15 +0300)]
UBIFS: prepare to fix a horrid bug
Al Viro pointed me to the fact that '->readdir()' and '->llseek()' have no
mutual exclusion, which means the 'ubifs_dir_llseek()' can be run while we are
in the middle of 'ubifs_readdir()'.
First of all, this means that 'file->private_data' can be freed while
'ubifs_readdir()' uses it. But this particular patch does not fix the problem.
This patch is only a preparation, and the fix will follow next.
In this patch we make 'ubifs_readdir()' stop using 'file->f_pos' directly,
because 'file->f_pos' can be changed by '->llseek()' at any point. This may
lead 'ubifs_readdir()' to returning inconsistent data: directory entry names
may correspond to incorrect file positions.
So here we introduce a local variable 'pos', read 'file->f_pos' once at the
very beginning, and then stick to 'pos'. The result of this is that when
'ubifs_dir_llseek()' changes 'file->f_pos' while we are in the middle of
'ubifs_readdir()', the latter "wins".
Cc: stable@vger.kernel.org
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Tested-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Akira Takeuchi [Fri, 28 Jun 2013 15:53:03 +0000 (16:53 +0100)]
mn10300: Use early_param() to parse "mem=" parameter
This fixes the problem that "init=" options may not be passed to the kernel
correctly.
parse_mem_cmdline() of mn10300 arch gets rid of "mem=" string from
redboot_command_line. Then init_setup() parses the "init=" options from
static_command_line, which is a copy of redboot_command_line, and keeps
the pointer to the init options in execute_command variable.
Since commit 026cee0 upstream (params: <level>_initcall-like kernel
parameters), static_command_line becomes overwritten by saved_command_line at
do_initcall_level(). Notice that saved_command_line is a command line
which includes "mem=" string.
As a result, execute_command may point to a wrong string, offset by the
length of the "mem=" parameter.
I noticed this problem when using the command line like this:
mem=128M console=ttyS0,115200 init=/bin/sh
Here is the processing flow of command line parameters.
start_kernel()
setup_arch(&command_line)
parse_mem_cmdline(cmdline_p)
* strcpy(boot_command_line, redboot_command_line);
* Remove "mem=xxx" from redboot_command_line.
* *cmdline_p = redboot_command_line;
setup_command_line(command_line) <-- command_line is redboot_command_line
* strcpy(saved_command_line, boot_command_line)
* strcpy(static_command_line, command_line)
parse_early_param()
strlcpy(tmp_cmdline, boot_command_line, COMMAND_LINE_SIZE);
parse_early_options(tmp_cmdline);
parse_args("early options", cmdline, NULL, 0, 0, 0, do_early_param);
parse_args("Booting ..", static_command_line, ...);
init_setup() <-- save the pointer in execute_command
rest_init()
kernel_thread(kernel_init, NULL, CLONE_FS | CLONE_SIGHAND);
At this point, execute_command points to "/bin/sh" string.
kernel_init()
kernel_init_freeable()
do_basic_setup()
do_initcalls()
do_initcall_level()
(*) strcpy(static_command_line, saved_command_line);
Here, execute_command gets to point to "200" string !!
Signed-off-by: David Howells <dhowells@redhat.com>
Akira Takeuchi [Fri, 28 Jun 2013 15:53:01 +0000 (16:53 +0100)]
mn10300: Allow to pass array name to get_user()
This fixes the following compile error:
CC block/scsi_ioctl.o
block/scsi_ioctl.c: In function 'sg_scsi_ioctl':
block/scsi_ioctl.c:449: error: invalid initializer
Signed-off-by: David Howells <dhowells@redhat.com>
Dave Airlie [Fri, 28 Jun 2013 03:27:40 +0000 (13:27 +1000)]
drm/qxl: add missing access check for execbuffer ioctl
Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Thadeu Lima de Souza Cascardo [Thu, 27 Jun 2013 21:00:10 +0000 (18:00 -0300)]
powerpc/eeh: Add eeh_dev to the cache during boot
commit f8f7d63fd96ead101415a1302035137a866f8998 ("powerpc/eeh: Trace eeh
device from I/O cache") broke EEH on pseries for devices that were
present during boot and have not been hotplugged/DLPARed.
eeh_check_failure will get the eeh_dev from the cache, and will get
NULL. eeh_addr_cache_build adds the addresses to the cache, but eeh_dev
for the given pci_device is not set yet. Just reordering the call to
eeh_addr_cache_insert_dev works fine. The ordering is similar to the one
in eeh_add_device_late.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Acked-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Josh Durgin [Wed, 26 Jun 2013 19:56:17 +0000 (12:56 -0700)]
rbd: send snapshot context with writes
Sending the right snapshot context with each write is required for
snapshots to work. Due to the ordering of calls, the snapshot context
is never set for any requests. This causes writes to the current
version of the image to be reflected in all snapshots, which are
supposed to be read-only.
This happens because rbd_osd_req_format_write() sets the snapshot
context based on obj_request->img_request. At this point, however,
obj_request->img_request has not been set yet, so the snapshot context
is set to NULL. Fix this by moving rbd_img_obj_request_add(), which
sets obj_request->img_request, before the osd request formatting
calls.
This resolves:
http://tracker.ceph.com/issues/5465
Reported-by: Karol Jurak <karol.jurak@gmail.com>
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@linaro.org>