Anton Blanchard [Tue, 2 Feb 2010 22:46:13 +0000 (14:46 -0800)]
sched: cpuacct: Use bigger percpu counter batch values for stats counters
commit fa535a77bd3fa32b9215ba375d6a202fe73e1dd6 upstream
When CONFIG_VIRT_CPU_ACCOUNTING and CONFIG_CGROUP_CPUACCT are
enabled we can call cpuacct_update_stats with values much larger
than percpu_counter_batch. This means the call to
percpu_counter_add will always add to the global count which is
protected by a spinlock and we end up with a global spinlock in
the scheduler.
Based on an idea by KOSAKI Motohiro, this patch scales the batch
value by cputime_one_jiffy such that we have the same batch
limit as we would if CONFIG_VIRT_CPU_ACCOUNTING was disabled.
His patch did this once at boot but that initialisation happened
too early on PowerPC (before time_init) and it was never updated
at runtime as a result of a hotplug cpu add/remove.
This patch instead scales percpu_counter_batch by
cputime_one_jiffy at runtime, which keeps the batch correct even
after cpu hotplug operations. We cap it at INT_MAX in case of
overflow.
For architectures that do not support CONFIG_VIRT_CPU_ACCOUNTING,
cputime_one_jiffy is the constant 1 and gcc is smart enough to
optimise min(s32 percpu_counter_batch * cputime_one_jiffy, INT_MAX)
to just percpu_counter_batch, at least on x86 and PowerPC. So there
is no need to add an #ifdef.
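A minimal sketch of the scaling described above (the helper name is
illustrative; the patch computes this value where cpuacct_update_stats()
calls percpu_counter_add()):

    /* Scale the batch by cputime_one_jiffy at runtime, capped at
     * INT_MAX, so CONFIG_VIRT_CPU_ACCOUNTING kernels keep the same
     * effective batch limit as everyone else. */
    static inline s32 cpuacct_stats_batch(void)
    {
            return min_t(long, percpu_counter_batch * cputime_one_jiffy,
                         INT_MAX);
    }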
On a 64 thread PowerPC box with CONFIG_VIRT_CPU_ACCOUNTING and
CONFIG_CGROUP_CPUACCT enabled, a context switch microbenchmark
is 234x faster and almost matches a CONFIG_CGROUP_CPUACCT
disabled kernel:
CONFIG_CGROUP_CPUACCT disabled: 16906698 ctx switches/sec
CONFIG_CGROUP_CPUACCT enabled:     61720 ctx switches/sec
CONFIG_CGROUP_CPUACCT + patch:  16663217 ctx switches/sec
Tested with:
wget http://ozlabs.org/~anton/junkcode/context_switch.c
make context_switch
for i in `seq 0 63`; do taskset -c $i ./context_switch & done
vmstat 1
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Tested-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Suresh Siddha [Wed, 31 Mar 2010 23:47:45 +0000 (16:47 -0700)]
sched: Fix select_idle_sibling() logic in select_task_rq_fair()
commit 99bd5e2f245d8cd17d040c82d40becdb3efd9b69 upstream
Issues in the current select_idle_sibling() logic in select_task_rq_fair()
in the context of a task wake-up:
a) Once we select the idle sibling, we use that domain (spanning the cpu
that the task was woken up on and the idle sibling that we found) in our
wake_affine() decisions. This domain is completely different from the
domain we are supposed to use: the one spanning the cpu where the task
was woken up and the cpu where the task previously ran.
b) We do the select_idle_sibling() check only for the cpu that the task
was woken up on. If select_task_rq_fair() selects the previously-run cpu
for waking the task, doing a select_idle_sibling() check for that cpu
would also help, and we don't do this currently.
c) In scenarios where the cpu that the task was woken up on is busy but
its HT siblings are idle, we select the idle HT sibling for the wakeup
instead of a core where the task previously ran and which is now
completely idle. That is, we are not basing the decision on
wake_affine() but directly selecting an idle sibling, which can cause
an imbalance at the SMT/MC level that the periodic load balancer then
has to correct.
Fix this by first going through the load-imbalance calculations using
wake_affine(), and only once we have decided between the woken-up cpu
and the previously-ran cpu, choosing a possible idle sibling of that
cpu to wake the task on.
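A rough sketch of the resulting ordering (the wrapper name is
hypothetical; the real select_task_rq_fair() also handles domain flags
and the slow balancing path):

    /* Decide the affine target first, then look for an idle sibling
     * of whichever cpu won. */
    static int wakeup_target(struct task_struct *p, int cpu, int prev_cpu,
                             struct sched_domain *affine_sd, int sync)
    {
            int target = prev_cpu;

            if (affine_sd && wake_affine(affine_sd, p, sync))
                    target = cpu;

            return select_idle_sibling(p, target);
    }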
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1270079265.7835.8.camel@sbs-t61.sc.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Fri, 16 Apr 2010 12:59:29 +0000 (14:59 +0200)]
sched: Pre-compute cpumask_weight(sched_domain_span(sd))
commit 669c55e9f99b90e46eaa0f98a67ec53d46dc969a upstream
Dave reported that his large SPARC machines spend lots of time in
hweight64(), try and optimize some of those needless cpumask_weight()
invocations (esp. with the large offstack cpumasks these are very
expensive indeed).
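The shape of the fix, sketched (the helper name is illustrative; the
cached sd->span_weight field matches the one this commit introduces):

    /* Compute the weight once when the domains are built ... */
    static void set_span_weight(struct sched_domain *sd)
    {
            sd->span_weight = cpumask_weight(sched_domain_span(sd));
    }

    /* ... so hot paths read sd->span_weight instead of re-running
     * hweight64() over a potentially large offstack cpumask. */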
Reported-by: David Miller <davem@davemloft.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Mike Galbraith [Thu, 11 Mar 2010 16:17:16 +0000 (17:17 +0100)]
sched: Fix select_idle_sibling()
commit 8b911acdf08477c059d1c36c21113ab1696c612b upstream
Don't bother with selection when the current cpu is idle. Recent load
balancing changes also make it no longer necessary to check wake_affine()
success before returning the selected sibling, so we now always use it.
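A minimal sketch of the short-circuit (the wrapper name is
illustrative):

    static int sibling_or_self(struct task_struct *p, int cpu)
    {
            /* The waking cpu is idle already: no point searching. */
            if (idle_cpu(cpu))
                    return cpu;

            return select_idle_sibling(p, cpu);
    }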
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268301369.6785.36.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Mike Galbraith [Mon, 4 Jan 2010 13:44:56 +0000 (14:44 +0100)]
sched: Fix vmark regression on big machines
commit 50b926e439620c469565e8be0f28be78f5fca1ce upstream
SD_PREFER_SIBLING is set at the CPU domain level if power saving isn't
enabled, leading to many cache misses on large machines as we traverse
looking for an idle shared cache to wake to. Change the enabler of
select_idle_sibling() to SD_SHARE_PKG_RESOURCES, and enable same at the
sibling domain level.
Reported-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1262612696.15495.15.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Thu, 12 Nov 2009 14:55:29 +0000 (15:55 +0100)]
sched: More generic WAKE_AFFINE vs select_idle_sibling()
commit fe3bcfe1f6c1fc4ea7706ac2d05e579fd9092682 upstream
Instead of only considering SD_WAKE_AFFINE | SD_PREFER_SIBLING
domains also allow all SD_PREFER_SIBLING domains below a
SD_WAKE_AFFINE domain to change the affinity target.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091112145610.909723612@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Thu, 12 Nov 2009 14:55:28 +0000 (15:55 +0100)]
sched: Cleanup select_task_rq_fair()
commit a50bde5130f65733142b32975616427d0ea50856 upstream
Clean up the new affine to idle sibling bits while trying to
grok them. Should not have any functional differences.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091112145610.832503781@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Daniel J Blueman [Tue, 1 Jun 2010 13:06:13 +0000 (14:06 +0100)]
sched: apply RCU protection to wake_affine()
commit f3b577dec1f2ce32d2db6d2ca6badff7002512af upstream
The task_group() function returns a pointer that must be protected
by either RCU, the ->alloc_lock, or the cgroup lock (see the
rcu_dereference_check() in task_subsys_state(), which is invoked by
task_group()). The wake_affine() function currently does none of these,
which means that a concurrent update would be within its rights to free
the structure returned by task_group(). Because wake_affine() uses this
structure only to compute load-balancing heuristics, there is no reason
to acquire either of the two locks.
Therefore, this commit introduces an RCU read-side critical section that
starts before the first call to task_group() and ends after the last use
of the "tg" pointer returned from task_group(). Thanks to Li Zefan for
pointing out the need to extend the RCU read-side critical section from
that proposed by the original patch.
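A condensed sketch of the resulting critical section (the actual
load-balancing computations are elided):

    static int wake_affine_sketch(struct task_struct *p, int sync)
    {
            struct task_group *tg;
            int affine = 0;

            rcu_read_lock();
            tg = task_group(current);       /* only valid under RCU */
            /* ... compute load-balancing heuristics from tg ... */
            rcu_read_unlock();              /* after the last use of tg */

            return affine;
    }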
Signed-off-by: Daniel J Blueman <daniel.blueman@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Tue, 1 Dec 2009 11:21:47 +0000 (12:21 +0100)]
sched: Remove unnecessary RCU exclusion
commit fb58bac5c75bfff8bbf7d02071a10a62f32fe28b upstream
As Nick pointed out, and as I realized myself when doing "sched: Fix
balance vs hotplug race", the patch "sched: for_each_domain() vs RCU"
is wrong: sched_domains are freed after synchronize_sched(), which
means disabling preemption is enough.
Reported-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Thu, 19 Aug 2010 11:31:43 +0000 (13:31 +0200)]
sched: Fix rq->clock synchronization when migrating tasks
commit 861d034ee814917a83bd5de4b26e3b8336ddeeb8 upstream
sched_fork() -- we do task placement in ->task_fork_fair(), so ensure
we update_rq_clock() there and work with current time. We leave the
vruntime in relative state, so the time delay until wake_up_new_task()
doesn't matter.
wake_up_new_task() -- since task_fork_fair() left p->vruntime in
relative state we can safely migrate; the activate_task() on the
remote rq will call update_rq_clock() and cause the clock to be
synced (enough).
Tested-by: Jack Daniel <wanders.thirst@gmail.com>
Tested-by: Philby John <pjohn@mvista.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1281002322.1923.1708.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Fri, 26 Mar 2010 11:22:14 +0000 (12:22 +0100)]
sched: Fix nr_uninterruptible count
commit cc87f76a601d2d256118f7bab15e35254356ae21 upstream
The cpuload calculation in calc_load_account_active() assumes
rq->nr_uninterruptible will not change on an offline cpu after
migrate_nr_uninterruptible(). However the recent migrate on wakeup
changes broke that and would result in decrementing the offline cpu's
rq->nr_uninterruptible.
Fix this by accounting the nr_uninterruptible on the waking cpu.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Thu, 25 Mar 2010 20:05:16 +0000 (21:05 +0100)]
sched: Optimize task_rq_lock()
commit 65cc8e4859ff29a9ddc989c88557d6059834c2a2 upstream
Now that we hold the rq->lock over set_task_cpu() again, we can do
away with most of the TASK_WAKING checks and reduce them again to
set_cpus_allowed_ptr().
Removes some conditionals from scheduling hot-paths.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Oleg Nesterov <oleg@redhat.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Wed, 24 Mar 2010 17:34:10 +0000 (18:34 +0100)]
sched: Fix TASK_WAKING vs fork deadlock
commit 0017d735092844118bef006696a750a0e4ef6ebd upstream
Oleg noticed a few races with the TASK_WAKING usage on fork.
- since TASK_WAKING is basically a spinlock, it should be IRQ safe
- since we set TASK_WAKING (*) without holding rq->lock it could be
that there still is an rq->lock holder, thereby not actually
providing full serialization.
(*) in fact we clear PF_STARTING, which in effect enables TASK_WAKING.
Cure the second issue by not setting TASK_WAKING in sched_fork(), but
only temporarily in wake_up_new_task() while calling select_task_rq().
Cure the first by holding rq->lock around the select_task_rq() call;
this will disable IRQs. This however requires that we push down the
rq->lock release into select_task_rq_fair()'s cgroup stuff.
Because select_task_rq_fair() still needs to drop the rq->lock we
cannot fully get rid of TASK_WAKING.
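Roughly, the resulting wake_up_new_task() flow (a sketch; the
->select_task_rq() signature and the re-locking of the destination rq
are simplified here):

    void wake_up_new_task_sketch(struct task_struct *p)
    {
            unsigned long flags;
            struct rq *rq;
            int cpu;

            rq = task_rq_lock(p, &flags);   /* also disables IRQs */
            p->state = TASK_WAKING;         /* set only here, briefly */
            cpu = select_task_rq(p, SD_BALANCE_FORK, 0);
            set_task_cpu(p, cpu);           /* real code re-locks the
                                             * destination rq here */
            p->state = TASK_RUNNING;
            activate_task(rq, p, 0);
            task_rq_unlock(rq, &flags);
    }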
Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Oleg Nesterov [Mon, 15 Mar 2010 09:10:27 +0000 (10:10 +0100)]
sched: Make select_fallback_rq() cpuset friendly
commit 9084bb8246ea935b98320554229e2f371f7f52fa upstream
Introduce cpuset_cpus_allowed_fallback() helper to fix the cpuset problems
with select_fallback_rq(). It can be called from any context and can't use
any cpuset locks including task_lock(). It is called when the task doesn't
have online cpus in ->cpus_allowed but ttwu/etc must be able to find a
suitable cpu.
I am not proud of this patch. Everything which needs such a fat comment
can't be good even if correct. But I'd prefer to not change the locking
rules in code I hardly understand, and in any case I believe this
simple change makes the code much more correct compared to the
deadlocks we currently have.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100315091027.GA9155@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Oleg Nesterov [Mon, 15 Mar 2010 09:10:23 +0000 (10:10 +0100)]
sched: _cpu_down(): Don't play with current->cpus_allowed
commit 6a1bdc1b577ebcb65f6603c57f8347309bc4ab13 upstream
_cpu_down() changes the current task's affinity and then recovers it at
the end. The problems are well known: we can't restore old_allowed if it
was bound to the now-dead-cpu, and we can race with the userspace which
can change cpu-affinity during unplug.
_cpu_down() should not play with current->cpus_allowed at all. Instead,
take_cpu_down() can migrate the caller of _cpu_down() after __cpu_disable()
removes the dying cpu from cpu_online_mask.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100315091023.GA9148@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Oleg Nesterov [Mon, 15 Mar 2010 09:10:19 +0000 (10:10 +0100)]
sched: sched_exec(): Remove the select_fallback_rq() logic
commit 30da688ef6b76e01969b00608202fff1eed2accc upstream.
sched_exec()->select_task_rq() reads/updates ->cpus_allowed lockless.
This can race with other CPUs updating our ->cpus_allowed, and this
looks meaningless to me.
The task is current and running, so it must have online cpus in
->cpus_allowed; the fallback mode is bogus. And, if ->sched_class
returns the "wrong" cpu, this likely means we raced with
set_cpus_allowed(), which was called for a reason; why should
sched_exec() retry and call ->select_task_rq() again?
Change the code to call sched_class->select_task_rq() directly and do
nothing if the returned cpu is wrong after re-checking under rq->lock.
From now on task_struct->cpus_allowed is always stable under
TASK_WAKING, and select_fallback_rq() is always called either under
rq->lock or by the owner of TASK_WAKING (select_task_rq).
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100315091019.GA9141@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Oleg Nesterov [Mon, 15 Mar 2010 09:10:14 +0000 (10:10 +0100)]
sched: move_task_off_dead_cpu(): Remove retry logic
commit c1804d547dc098363443667609c272d1e4d15ee8 upstream
The previous patch preserved the retry logic, but it looks unneeded.
__migrate_task() can only fail if we raced with migration after we
dropped the lock, but in this case the caller of set_cpus_allowed/etc
must initiate the migration itself if ->on_rq == T.
We already fixed p->cpus_allowed, and the changes in the active/online
masks must be visible to the racer, so it should migrate the task to
an online cpu correctly.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100315091014.GA9138@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Oleg Nesterov [Mon, 15 Mar 2010 09:10:10 +0000 (10:10 +0100)]
sched: move_task_off_dead_cpu(): Take rq->lock around select_fallback_rq()
commit 1445c08d06c5594895b4fae952ef8a457e89c390 upstream
move_task_off_dead_cpu()->select_fallback_rq() reads/updates ->cpus_allowed
lockless. We can race with set_cpus_allowed() running in parallel.
Change it to take rq->lock around select_fallback_rq(). Note that it
is not trivial to move this spin_lock() into select_fallback_rq(): we
must recheck that the task was not migrated after we take the lock,
and other callers do not need this lock.
To avoid the races with other callers of select_fallback_rq() which rely on
TASK_WAKING, we also check p->state != TASK_WAKING and do nothing otherwise.
The owner of TASK_WAKING must update ->cpus_allowed and choose the correct
CPU anyway, and the subsequent __migrate_task() is just meaningless because
p->se.on_rq must be false.
Alternatively, we could change select_task_rq() to take rq->lock right
after it calls sched_class->select_task_rq(), but this looks a bit ugly.
Also, change it to not assume irqs are disabled and absorb __migrate_task_irq().
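The essence of the change, sketched (the helper name is hypothetical;
the double-rq locking of the migration itself is elided):

    static int fallback_dest_cpu(int dead_cpu, struct task_struct *p)
    {
            unsigned long flags;
            struct rq *rq = task_rq_lock(p, &flags);
            int dest_cpu = dead_cpu;

            /* The owner of TASK_WAKING updates ->cpus_allowed and
             * chooses the cpu itself; do nothing in that case. */
            if (p->state != TASK_WAKING)
                    dest_cpu = select_fallback_rq(dead_cpu, p);

            task_rq_unlock(rq, &flags);
            return dest_cpu;
    }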
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100315091010.GA9131@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Oleg Nesterov [Mon, 15 Mar 2010 09:10:03 +0000 (10:10 +0100)]
sched: Kill the broken and deadlockable cpuset_lock/cpuset_cpus_allowed_locked code
commit 897f0b3c3ff40b443c84e271bef19bd6ae885195 upstream
This patch just states the fact the cpusets/cpuhotplug interaction is
broken and removes the deadlockable code which only pretends to work.
- cpuset_lock() doesn't really work. It is needed for
cpuset_cpus_allowed_locked() but we can't take this lock in
try_to_wake_up()->select_fallback_rq() path.
- cpuset_lock() is deadlockable. Suppose that a task T bound to a CPU
takes callback_mutex. If cpu_down(CPU) happens before T drops
callback_mutex, stop_machine() preempts T; then migration_call(CPU_DEAD)
tries to take cpuset_lock() and hangs forever, because the CPU is
already dead and thus T can't be scheduled.
- cpuset_cpus_allowed_locked() is deadlockable too. It takes task_lock()
which is not irq-safe, but try_to_wake_up() can be called from irq.
Kill them, and change select_fallback_rq() to use cpu_possible_mask, like
we currently do without CONFIG_CPUSETS.
Also, with or without this patch, with or without CONFIG_CPUSETS, the
callers of select_fallback_rq() can race with each other or with
set_cpus_allowed() paths.
The subsequent patches try to fix these problems.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100315091003.GA9123@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Oleg Nesterov [Tue, 30 Mar 2010 16:58:29 +0000 (18:58 +0200)]
sched: set_cpus_allowed_ptr(): Don't use rq->migration_thread after unlock
commit 47a70985e5c093ae03d8ccf633c70a93761d86f2 upstream
Trivial typo fix. rq->migration_thread can be NULL after
task_rq_unlock(), this is why we have "mt" which should be
used instead.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100330165829.GA18284@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Thomas Gleixner [Wed, 20 Jan 2010 20:59:06 +0000 (20:59 +0000)]
sched: Queue a deboosted task to the head of the RT prio queue
commit 60db48cacb9b253d5607a5ff206112a59cd09e34 upstream
rtmutex_set_prio() is used to implement priority inheritance for
futexes. When a task is deboosted it gets enqueued at the tail of its
RT priority list. This is violating the POSIX scheduling semantics:
rt priority list X contains two runnable tasks A and B
task A runs with priority X and holds mutex M
task C preempts A and is blocked on mutex M
-> task A is boosted to priority of task C (Y)
task A unlocks the mutex M and deboosts itself
-> A is dequeued from rt priority list Y
-> A is enqueued to the tail of rt priority list X
task C schedules away
task B runs
This is wrong as task A did not schedule away and therefore violates
the POSIX scheduling semantics.
Enqueue the task to the head of the priority list instead.
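The deboost path then becomes, roughly (a fragment sketched around
rt_mutex_setprio(); it relies on the head-queueing enqueue added by
the next two patches in this series):

    oldprio = p->prio;
    p->prio = prio;
    if (on_rq)
            /* head-queue on deboost: the task never scheduled away */
            enqueue_task(rq, p, 0, oldprio < prio);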
Reported-by: Mathias Weber <mathias.weber.mw1@roche.com>
Reported-by: Carsten Emde <cbe@osadl.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Tested-by: Carsten Emde <cbe@osadl.org>
Tested-by: Mathias Weber <mathias.weber.mw1@roche.com>
LKML-Reference: <20100120171629.809074113@linutronix.de>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Thomas Gleixner [Wed, 20 Jan 2010 20:59:01 +0000 (20:59 +0000)]
sched: Implement head queueing for sched_rt
commit 37dad3fce97f01e5149d69de0833d8452c0e862e upstream
The ability of enqueueing a task to the head of a SCHED_FIFO priority
list is required to fix some violations of POSIX scheduling policy.
Implement the functionality in sched_rt.
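At its core this is a list_add() vs list_add_tail() choice; a sketch
(function name illustrative, modeled on the rt prio array layout):

    static void __enqueue_rt_entity_sketch(struct rt_rq *rt_rq,
                                           struct sched_rt_entity *rt_se,
                                           bool head)
    {
            struct list_head *queue =
                    rt_rq->active.queue + rt_se_prio(rt_se);

            if (head)
                    list_add(&rt_se->run_list, queue);
            else
                    list_add_tail(&rt_se->run_list, queue);
    }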
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Tested-by: Carsten Emde <cbe@osadl.org>
Tested-by: Mathias Weber <mathias.weber.mw1@roche.com>
LKML-Reference: <20100120171629.772169931@linutronix.de>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Thomas Gleixner [Wed, 20 Jan 2010 20:58:57 +0000 (20:58 +0000)]
sched: Extend enqueue_task to allow head queueing
commit ea87bb7853168434f4a82426dd1ea8421f9e604d upstream
The ability of enqueueing a task to the head of a SCHED_FIFO priority
list is required to fix some violations of POSIX scheduling policy.
Extend the related functions with a "head" argument.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Tested-by: Carsten Emde <cbe@osadl.org>
Tested-by: Mathias Weber <mathias.weber.mw1@roche.com>
LKML-Reference: <20100120171629.734886007@linutronix.de>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Mon, 15 Feb 2010 13:45:54 +0000 (14:45 +0100)]
sched: Fix race between ttwu() and task_rq_lock()
commit 0970d2992dfd7d5ec2c787417cf464f01eeaf42a upstream
Thomas found that due to ttwu() changing a task's cpu without holding
the rq->lock, task_rq_lock() might end up locking the wrong rq.
Avoid this by serializing against TASK_WAKING.
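The serialization amounts to spinning until the wakeup settles; a
sketch of the resulting task_rq_lock() loop (irq-flag handling
elided):

    static struct rq *task_rq_lock_sketch(struct task_struct *p)
    {
            struct rq *rq;

            for (;;) {
                    while (p->state == TASK_WAKING)
                            cpu_relax();    /* ttwu() may still move p */
                    rq = task_rq(p);
                    raw_spin_lock(&rq->lock);
                    if (likely(rq == task_rq(p) &&
                               p->state != TASK_WAKING))
                            return rq;      /* locked the right rq */
                    raw_spin_unlock(&rq->lock);
            }
    }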
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1266241712.15770.420.camel@laptop>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Thu, 21 Jan 2010 15:34:27 +0000 (16:34 +0100)]
sched: Fix incorrect sanity check
commit 11854247e2c851e7ff9ce138e501c6cffc5a4217 upstream
We moved to migrate on wakeup, which means that sleeping tasks could
still be present on offline cpus. Amend the check to only test running
tasks.
Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Thu, 21 Jan 2010 20:04:57 +0000 (21:04 +0100)]
sched: Fix fork vs hotplug vs cpuset namespaces
commit fabf318e5e4bda0aca2b0d617b191884fda62703 upstream
There are a number of issues:
1) TASK_WAKING vs cgroup_clone (cpusets)
copy_process():
  sched_fork()
    child->state = TASK_WAKING; /* waiting for wake_up_new_task() */
  if (current->nsproxy != p->nsproxy)
    ns_cgroup_clone()
      cgroup_clone()
        mutex_lock(inode->i_mutex)
        mutex_lock(cgroup_mutex)
        cgroup_attach_task()
          ss->can_attach()
          ss->attach() [ -> cpuset_attach() ]
            cpuset_attach_task()
              set_cpus_allowed_ptr();
                while (child->state == TASK_WAKING)
                  cpu_relax();
will deadlock the system.
2) cgroup_clone (cpusets) vs copy_process
So even if the above would work we still have:
copy_process():
  if (current->nsproxy != p->nsproxy)
    ns_cgroup_clone()
      cgroup_clone()
        mutex_lock(inode->i_mutex)
        mutex_lock(cgroup_mutex)
        cgroup_attach_task()
          ss->can_attach()
          ss->attach() [ -> cpuset_attach() ]
            cpuset_attach_task()
              set_cpus_allowed_ptr();
  ...
  p->cpus_allowed = current->cpus_allowed
over-writing the modified cpus_allowed.
3) fork() vs hotplug
if we unplug the child's cpu after the sanity check, when the child
gets attached to the task_list but before wake_up_new_task(), shit
will meet with fan.
Solve all these issues by moving fork cpu selection into
wake_up_new_task().
Reported-by: Serge E. Hallyn <serue@us.ibm.com>
Tested-by: Serge E. Hallyn <serue@us.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1264106190.4283.1314.camel@laptop>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Sun, 20 Dec 2009 16:36:27 +0000 (17:36 +0100)]
sched: Fix hotplug hang
commit 70f1120527797adb31c68bdc6f1b45e182c342c7 upstream
The hot-unplug kstopmachine usage does a wakeup after
deactivating the cpu, hence we cannot use cpu_active()
here but must rely on the good olde online.
Reported-by: Sachin Sant <sachinp@in.ibm.com>
Reported-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Jens Axboe <jens.axboe@oracle.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
LKML-Reference: <1261326987.4314.24.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Wed, 16 Dec 2009 17:04:41 +0000 (18:04 +0100)]
sched: Remove the cfs_rq dependency from set_task_cpu()
commit 88ec22d3edb72b261f8628226cd543589a6d5e1b upstream
In order to remove the cfs_rq dependency from set_task_cpu() we
need to ensure the task is cfs_rq invariant for all callsites.
The simple approach is to subtract cfs_rq->min_vruntime from
se->vruntime on dequeue, and add cfs_rq->min_vruntime on
enqueue.
However, this has the downside of breaking FAIR_SLEEPERS since
we lose the old vruntime as we only maintain the relative
position.
To solve this, we observe that we only migrate runnable tasks,
we do this using deactivate_task(.sleep=0) and
activate_task(.wakeup=0), therefore we can restrain the
min_vruntime invariance to that state.
The only other case is wakeup balancing, since we want to
maintain the old vruntime we cannot make it relative on dequeue,
but since we don't migrate inactive tasks, we can do so right
before we activate it again.
This is where we need the new pre-wakeup hook, we need to call
this while still holding the old rq->lock. We could fold it into
->select_task_rq(), but since that has multiple callsites and
would obfuscate the locking requirements, that seems like a
fudge.
This leaves the fork() case, simply make sure that ->task_fork()
leaves the ->vruntime in a relative state.
This covers all cases where set_task_cpu() gets called, and
ensures it sees a relative vruntime.
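The invariant itself is small; a sketch of the dequeue/enqueue pair
(helper names illustrative; the wakeup-migration case performs the
same subtraction from the new pre-wakeup hook):

    /* Keep se->vruntime relative while the task may migrate. */
    static void vruntime_make_relative(struct cfs_rq *cfs_rq,
                                       struct sched_entity *se)
    {
            se->vruntime -= cfs_rq->min_vruntime;   /* on dequeue */
    }

    static void vruntime_make_absolute(struct cfs_rq *cfs_rq,
                                       struct sched_entity *se)
    {
            se->vruntime += cfs_rq->min_vruntime;   /* on enqueue */
    }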
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091216170518.191697025@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Wed, 16 Dec 2009 17:04:40 +0000 (18:04 +0100)]
sched: Add pre and post wakeup hooks
commit efbbd05a595343a413964ad85a2ad359b7b7efbd upstream
As will be apparent in the next patch, we need a pre wakeup hook
for sched_fair task migration, hence rename the post wakeup hook
and add a pre wakeup one.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091216170518.114746117@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Wed, 16 Dec 2009 17:04:38 +0000 (18:04 +0100)]
sched: Fix select_task_rq() vs hotplug issues
commit 5da9a0fb673a0ea0a093862f95f6b89b3390c31e upstream
Since select_task_rq() is now responsible for guaranteeing
->cpus_allowed and cpu_active_mask, we need to verify this.
select_task_rq_rt() can blindly return
smp_processor_id()/task_cpu() without checking the valid masks;
select_task_rq_fair() can do the same in the rare case that all
SD_flags are disabled.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091216170517.961475466@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Wed, 16 Dec 2009 17:04:37 +0000 (18:04 +0100)]
sched: Fix sched_exec() balancing
commit 3802290628348674985d14914f9bfee7b9084548 upstream
Since we access ->cpus_allowed without holding rq->lock we need
a retry loop to validate the result; this comes for near free
when we merge sched_migrate_task() into sched_exec() since that
already does the needed check.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091216170517.884743662@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Thu, 17 Dec 2009 12:16:31 +0000 (13:16 +0100)]
sched: Fix broken assertion
commit 077614ee1e93245a3b9a4e1213659405dbeb0ba6 upstream
There's a preemption race in the set_task_cpu() debug check in
that when we get preempted after setting task->state we'd still
be on the rq proper, but fail the test.
Check for preempted tasks, since those are always on the RQ.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20091217121830.137155561@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Ingo Molnar [Thu, 17 Dec 2009 05:05:49 +0000 (06:05 +0100)]
sched: Make warning less noisy
commit 416eb39556a03d1c7e52b0791e9052ccd71db241 upstream
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091216170517.807938893@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Wed, 16 Dec 2009 17:04:36 +0000 (18:04 +0100)]
sched: Ensure set_task_cpu() is never called on blocked tasks
commit e2912009fb7b715728311b0d8fe327a1432b3f79 upstream
In order to clean up the set_task_cpu() rq dependencies we need
to ensure it is never called on blocked tasks because such usage
does not pair with consistent rq->lock usage.
This puts the migration burden on ttwu().
Furthermore we need to close a race against changing
->cpus_allowed, since select_task_rq() runs with only preemption
disabled.
For sched_fork() this is safe because the child isn't in the
tasklist yet; for wakeup we fix this by synchronizing
set_cpus_allowed_ptr() against TASK_WAKING, which leaves
sched_exec() as a problem.
This also closes a hole in 6ad4c1888 ("sched: Fix balance vs
hotplug race") where ->select_task_rq() doesn't validate the
result against the sched_domain/root_domain.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091216170517.807938893@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Wed, 16 Dec 2009 17:04:35 +0000 (18:04 +0100)]
sched: Use TASK_WAKING for fork wakeups
commit 06b83b5fbea273672822b6ee93e16781046553ec upstream
For later convenience use TASK_WAKING for fresh tasks.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091216170517.732561278@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Xiaotian Feng [Wed, 16 Dec 2009 17:04:32 +0000 (18:04 +0100)]
sched: Fix set_cpu_active() in cpu_down()
commit 9ee349ad6d326df3633d43f54202427295999c47 upstream
Sachin found cpu hotplug test failures on powerpc, which made
the kernel hang on his POWER box.
The problem is that we fail to re-activate a cpu when a
hot-unplug fails. Fix this by moving the de-activation into
_cpu_down after doing the initial checks.
Remove the synchronize_sched() calls and rely on those implied
by rebuilding the sched domains using the new mask.
Reported-by: Sachin Sant <sachinp@in.ibm.com>
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Tested-by: Sachin Sant <sachinp@in.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091216170517.500272612@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Thomas Gleixner [Wed, 9 Dec 2009 10:15:11 +0000 (10:15 +0000)]
sched: Use rcu in sched_get_rr_param()
commit 1a551ae715825bb2a2107a2dd68de024a1fa4e32 upstream
read_lock(&tasklist_lock) does not protect
sys_sched_get_rr_param() against a concurrent update of the
policy or scheduler parameters, as do_sched_setscheduler() does
not take the tasklist_lock.
The access to task->sched_class->get_rr_interval is protected by
task_rq_lock(task).
Use rcu_read_lock() to protect find_task_by_vpid() and prevent
the task struct from going away.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20091209100706.862897167@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Thomas Gleixner [Wed, 9 Dec 2009 10:15:01 +0000 (10:15 +0000)]
sched: Use rcu in sched_get/set_affinity()
commit 23f5d142519621b16cf2b378cf8adf4dcf01a616 upstream
tasklist_lock is held read locked to protect the
find_task_by_vpid() call and to prevent the task going away.
sched_setaffinity acquires a task struct ref and drops tasklist
lock right away. The access to the cpus_allowed mask is
protected by rq->lock.
rcu_read_lock() provides the same protection here.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20091209100706.789059966@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Thomas Gleixner [Wed, 9 Dec 2009 10:14:58 +0000 (10:14 +0000)]
sched: Use rcu in sys_sched_getscheduler/sys_sched_getparam()
commit 5fe85be081edf0ac92d83f9c39e0ab5c1371eb82 upstream
read_lock(&tasklist_lock) does not protect
sys_sched_getscheduler and sys_sched_getparam() against a
concurrent update of the policy or scheduler parameters as
do_sched_setscheduler() does not take the tasklist_lock. The
accessed integers can be retrieved w/o locking and are snapshots
anyway.
Using rcu_read_lock() to protect find_task_by_vpid() and prevent
the task struct from going away is not changing the above
situation.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20091209100706.753790977@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Rafael J. Wysocki [Sat, 12 Dec 2009 23:07:30 +0000 (00:07 +0100)]
sched: Make wakeup side and atomic variants of completion API irq safe
commit 7539a3b3d1f892dd97eaf094134d7de55c13befe upstream
Alan Stern noticed that all the wakeup side (and atomic) variants of the
completion APIs should be irq safe, but the newly introduced
completion_done() and try_wait_for_completion() aren't. The use of the
irq unsafe variants in IRQ contexts can cause crashes/hangs.
Fix the problem by making them use spin_lock_irqsave() and
spin_unlock_irqrestore().
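For example, completion_done() becomes (a sketch of the irq-safe
variant; try_wait_for_completion() is fixed the same way):

    bool completion_done(struct completion *x)
    {
            unsigned long flags;
            int ret = 1;

            spin_lock_irqsave(&x->wait.lock, flags);
            if (!x->done)
                    ret = 0;
            spin_unlock_irqrestore(&x->wait.lock, flags);

            return ret;
    }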
Reported-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Zhang Rui <rui.zhang@intel.com>
Cc: pm list <linux-pm@lists.linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: David Chinner <david@fromorbit.com>
Cc: Lachlan McIlroy <lachlan@sgi.com>
LKML-Reference: <200912130007.30541.rjw@sisk.pl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Ingo Molnar [Thu, 10 Dec 2009 19:32:39 +0000 (20:32 +0100)]
sched: Remove forced2_migrations stats
commit b9889ed1ddeca5a3f3569c8de7354e9e97d803ae upstream
This build warning:
kernel/sched.c: In function 'set_task_cpu':
kernel/sched.c:2070: warning: unused variable 'old_rq'
Made me realize that the forced2_migrations stat looks pretty
pointless (and a misnomer) - remove it.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Fri, 27 Nov 2009 16:32:46 +0000 (17:32 +0100)]
sched: Sanitize fork() handling
commit cd29fe6f2637cc2ccbda5ac65f5332d6bf5fa3c6 upstream
Currently we try to do task placement in wake_up_new_task() after we do
the load-balance pass in sched_fork(). This yields complicated semantics
in that we have to deal with tasks on different RQs and the
set_task_cpu() calls in copy_process() and sched_fork().
Rename ->task_new() to ->task_fork() and call it from sched_fork()
before the balancing, this gives the policy a clear point to place the
task.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Fri, 27 Nov 2009 14:44:43 +0000 (15:44 +0100)]
sched: Clean up ttwu() rq locking
commit ab19cb23313733c55e0517607844b86720b35f5f upstream
Since set_task_cpu() doesn't rely on rq->clock anymore we can simplify
the mess in ttwu().
Optimize things a bit by not fiddling with the IRQ state there.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Fri, 27 Nov 2009 13:12:25 +0000 (14:12 +0100)]
sched: Remove rq->clock coupling from set_task_cpu()
commit 5afcdab706d6002cb02b567ba46e650215e694e8 upstream
set_task_cpu() should be rq invariant and only touch task state, but it
currently fails to do so, which opens up a few races, since not all
callers hold both rq->locks.
Remove the reliance on rq->clock, as any site calling set_task_cpu()
should also do a remote clock update, which should ensure the observed
time between these two cpus is monotonic, as per
kernel/sched_clock.c:sched_clock_remote().
Therefore we can simply remove the clock_offset bits and be happy.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Hiroshi Shimamoto [Wed, 4 Nov 2009 07:16:54 +0000 (16:16 +0900)]
sched: Remove unused cpu_nr_migrations()
commit 9824a2b728b63e7ff586b9fd9293c819be79f0f3 upstream
cpu_nr_migrations() is not used, remove it.
Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4AF12A66.6020609@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Wed, 25 Nov 2009 12:31:39 +0000 (13:31 +0100)]
sched: Consolidate select_task_rq() callers
commit 970b13bacba14a8cef6f642861947df1d175b0b3 upstream
Small cleanup.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
[ v2: build fix ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Thomas Gleixner [Wed, 9 Dec 2009 08:32:03 +0000 (09:32 +0100)]
sched: Protect sched_rr_get_param() access to task->sched_class
commit dba091b9e3522b9d32fc9975e48d3b69633b45f0 upstream
sched_rr_get_param calls
task->sched_class->get_rr_interval(task) without protection
against a concurrent sched_setscheduler() call which modifies
task->sched_class.
Serialize the access with task_rq_lock(task) and hand the rq
pointer into get_rr_interval() as it's needed at least in the
sched_fair implementation.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <alpine.LFD.2.00.0912090930120.3089@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Thomas Gleixner [Tue, 8 Dec 2009 20:24:16 +0000 (20:24 +0000)]
sched: Protect task->cpus_allowed access in sched_getaffinity()
commit 3160568371da441b7f2fb57f2f1225404207e8f2 upstream
sched_getaffinity() is not protected against a concurrent
modification of the tasks affinity.
Serialize the access with task_rq_lock(task).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20091208202026.769251187@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Roland McGrath [Tue, 14 Sep 2010 19:22:58 +0000 (12:22 -0700)]
x86-64, compat: Retruncate rax after ia32 syscall entry tracing
commit eefdca043e8391dcd719711716492063030b55ac upstream.
In commit d4d6715, we reopened an old hole for a 64-bit ptracer touching a
32-bit tracee in system call entry. A %rax value set via ptrace at the
entry tracing stop gets used whole as a 32-bit syscall number, while we
only check the low 32 bits for validity.
Fix it by truncating %rax back to 32 bits after syscall_trace_enter,
in addition to testing the full 64 bits as has already been added.
Reported-by: Ben Hawkes <hawkes@sota.gen.nz>
Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
H. Peter Anvin [Tue, 7 Sep 2010 23:16:18 +0000 (16:16 -0700)]
compat: Make compat_alloc_user_space() incorporate the access_ok()
commit c41d68a513c71e35a14f66d71782d27a79a81ea6 upstream.
compat_alloc_user_space() expects the caller to independently call
access_ok() to verify the returned area. A missing call could
introduce problems on some architectures.
This patch incorporates the access_ok() check into
compat_alloc_user_space() and also adds a sanity check on the length.
The existing compat_alloc_user_space() implementations are renamed
arch_compat_alloc_user_space() and are used as part of the
implementation of the new global function.
This patch assumes NULL will cause __get_user()/__put_user() to either
fail or access userspace on all architectures. This should be
followed by checking the return value of compat_alloc_user_space()
for NULL in the callers, at which time the access_ok() in the callers
can also be removed.
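A sketch of the resulting wrapper (the length bound shown is
illustrative, not the exact check from the patch):

    void __user *compat_alloc_user_space(unsigned long len)
    {
            void __user *ptr;

            /* sanity check on the requested length */
            if (len > PAGE_SIZE)
                    return NULL;

            ptr = arch_compat_alloc_user_space(len);
            if (unlikely(!access_ok(VERIFY_WRITE, ptr, len)))
                    ptr = NULL;     /* callers must check for NULL */

            return ptr;
    }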
Reported-by: Ben Hawkes <hawkes@sota.gen.nz>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Tony Luck <tony.luck@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: James Bottomley <jejb@parisc-linux.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
H. Peter Anvin [Tue, 14 Sep 2010 19:42:41 +0000 (12:42 -0700)]
x86-64, compat: Test %rax for the syscall number, not %eax
commit 36d001c70d8a0144ac1d038f6876c484849a74de upstream.
On 64 bits, we always, by necessity, jump through the system call
table via %rax. For 32-bit system calls, in theory the system call
number is stored in %eax, and the code was testing %eax for a valid
system call number. At one point we loaded the stored value back from
the stack to enforce zero-extension, but that was removed in checkin
d4d67150165df8bf1cc05e532f6efca96f907cab. An actual 32-bit process
will not be able to introduce a non-zero-extended number, but it can
happen via ptrace.
Instead of re-introducing the zero-extension, test what we are
actually going to use, i.e. %rax. This only adds a handful of REX
prefixes to the code.
Reported-by: Ben Hawkes <hawkes@sota.gen.nz>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Roland McGrath <roland@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra [Fri, 10 Sep 2010 20:32:53 +0000 (22:32 +0200)]
x86, tsc: Fix a preemption leak in restore_sched_clock_state()
commit 55496c896b8a695140045099d4e0175cf09d4eae upstream.
Doh, a real life genuine preemption leak..
This caused a suspend failure.
Reported-bisected-and-tested-by-the-invaluable: Jeff Chua <jeff.chua.linux@gmail.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Nico Schottelius <nico-linux-20100709@schottelius.org>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Florian Pritz <flo@xssn.at>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Len Brown <lenb@kernel.org>
LKML-Reference: <1284150773.402.122.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Johannes Berg [Mon, 30 Aug 2010 10:24:54 +0000 (12:24 +0200)]
wireless extensions: fix kernel heap content leak
commit 42da2f948d949efd0111309f5827bf0298bcc9a4 upstream.
Wireless extensions have an unfortunate, undocumented
requirement that drivers always fill iwp->length when
returning a successful status. When
a driver doesn't do this, it leads to a kernel heap
content leak when userspace offers a larger buffer
than would have been necessary.
Arguably, this is a driver bug, as it should, if it
returns 0, fill iwp->length, even if it separately
indicated that the buffer contents were not valid.
However, we can also at least avoid the memory content
leak if the driver doesn't do this by setting the iwp
length to max_tokens, which then reflects how big the
buffer is that the driver may fill, regardless of how
big the userspace buffer is.
To illustrate the point, this patch also fixes a
corresponding cfg80211 bug (since this requirement
isn't documented nor was ever pointed out by anyone
during code review, I don't trust all drivers nor
all cfg80211 handlers to implement it correctly).
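The defensive part of the fix is small; a fragment sketched from the
shape of the change (descr and iwp being the handler descriptor and
request in the standard wext ioctl path):

    /* Bound the reported length by the kernel-side buffer size, not
     * by whatever buffer userspace happened to offer. */
    if (!(descr->flags & IW_DESCR_FLAG_NOMAX))
            iwp->length = descr->max_tokens;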
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
John W. Linville [Tue, 24 Aug 2010 19:27:34 +0000 (15:27 -0400)]
ath5k: check return value of ieee80211_get_tx_rate
commit d8e1ba76d619dbc0be8fbeee4e6c683b5c812d3a upstream.
This avoids a NULL pointer dereference as reported here:
https://bugzilla.redhat.com/show_bug.cgi?id=625889
When the WARN condition is hit in ieee80211_get_tx_rate, it will return
NULL. So, we need to check the return value and avoid dereferencing it
in that case.
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Acked-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Christian Lamparter [Tue, 24 Aug 2010 20:54:05 +0000 (22:54 +0200)]
p54: fix tx feedback status flag check
commit f880c2050f30b23c9b6f80028c09f76e693bf309 upstream.
Michael reported that p54* never really entered power
save mode, even though it was enabled.
It turned out that upon a power save mode change the
firmware will set a special flag onto the last outgoing
frame tx status (which in this case is almost always the
designated PSM nullfunc frame). This flag confused the
driver: it erroneously reported transmission failures
to the stack, which then generated the next nullfunc,
and so on...
Reported-by: Michael Buesch <mb@bu3sch.de>
Tested-by: Michael Buesch <mb@bu3sch.de>
Signed-off-by: Christian Lamparter <chunkeey@googlemail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Frederic Weisbecker [Sun, 22 Aug 2010 02:29:17 +0000 (04:29 +0200)]
perf: Initialize callchain roots' children hits
commit 5225c45899e872383ca39f5533d28ec63c54b39e upstream.
Each histogram entry has a callchain root that stores the
callchain samples. However we forgot to initialize the
tracking of children hits of these roots, which then got
random values on their creation.
The root's children hits value is multiplied by the minimum percentage
of hits provided by the user, and the result becomes the minimum
number of hits expected from child branches. If the random value due
to the uninitialization is big enough, then this minimum number
of hits can be huge and eventually filter out every child branch.
The end result was invisible callchains. All we need to
fix this is to initialize the children hits of the root.
Reported-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
KAMEZAWA Hiroyuki [Thu, 9 Sep 2010 23:38:01 +0000 (16:38 -0700)]
memory hotplug: fix next block calculation in is_removable
commit 0dcc48c15f63ee86c2fcd33968b08d651f0360a5 upstream.
next_active_pageblock() is for finding the next _used_ freeblock. It
skips several blocks when it finds a chunk of free pages larger than
a pageblock. But it has 2 bugs.
1. We have no lock, so page_order(page) - pageblock_order can be negative.
2. pageblocks_stride += is wrong; it should skip page_order(p) pages.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dmitry Torokhov [Wed, 1 Sep 2010 00:27:02 +0000 (17:27 -0700)]
Input: i8042 - fix device removal on unload
commit af045b86662f17bf130239a65995c61a34f00a6b upstream.
We need to call platform_device_unregister(i8042_platform_device)
before calling platform_driver_unregister() because i8042_remove()
resets i8042_platform_device to NULL. This leaves the platform device
instance behind and prevents driver reload.
Fixes https://bugzilla.kernel.org/show_bug.cgi?id=16613
Reported-by: Seryodkin Victor <vvscore@gmail.com>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Jan Sembera [Thu, 9 Sep 2010 23:37:54 +0000 (16:37 -0700)]
binfmt_misc: fix binfmt_misc priority
commit ee3aebdd8f5f8eac41c25c80ceee3d728f920f3b upstream.
Commit 74641f584da ("alpha: binfmt_aout fix") (May 2009) introduced a
regression - binfmt_misc is now consulted after binfmt_elf, which will
unfortunately break ia32el. ia32 ELF binaries on ia64 used to be matched
using binfmt_misc and executed using a wrapper. As 32-bit binaries are
now matched by binfmt_elf before binfmt_misc kicks in, the wrapper is
ignored.
The fix increases precedence of binfmt_misc to the original state.
Signed-off-by: Jan Sembera <jsembera@suse.cz>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Jerome Marchand [Thu, 9 Sep 2010 23:37:59 +0000 (16:37 -0700)]
kernel/groups.c: fix integer overflow in groups_search
commit 1c24de60e50fb19b94d94225458da17c720f0729 upstream.
gid_t is an unsigned int. If group_info contains a gid greater than
INT_MAX, the groups_search() function may look on the wrong side of
the search tree.
This solves some unfair "permission denied" problems.
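A sketch of the corrected search, modeled on kernel/groups.c (unsigned
indices and direct unsigned gid comparisons keep gids above INT_MAX on
the correct side):

    static int groups_search_sketch(const struct group_info *group_info,
                                    gid_t grp)
    {
            unsigned int left = 0, right = group_info->ngroups;

            while (left < right) {
                    unsigned int mid = (left + right) / 2;

                    if (grp > GROUP_AT(group_info, mid))
                            left = mid + 1;
                    else if (grp < GROUP_AT(group_info, mid))
                            right = mid;
                    else
                            return 1;
            }
            return 0;
    }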
Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Gary King [Thu, 9 Sep 2010 23:38:05 +0000 (16:38 -0700)]
bounce: call flush_dcache_page() after bounce_copy_vec()
commit ac8456d6f9a3011c824176bd6084d39e5f70a382 upstream.
I have been seeing problems on Tegra 2 (ARMv7 SMP) systems with HIGHMEM
enabled on 2.6.35 (plus some patches targeted at 2.6.36 to perform cache
maintenance lazily), and the root cause appears to be that the mm bouncing
code is calling flush_dcache_page before it copies the bounce buffer into
the bio.
The bounced page needs to be flushed after data is copied into it, to
ensure that architecture implementations can synchronize instruction and
data caches if necessary.
Signed-off-by: Gary King <gking@nvidia.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Acked-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Guennadi Liakhovetski [Thu, 9 Sep 2010 23:37:43 +0000 (16:37 -0700)]
mmc: fix the use of kunmap_atomic() in tmio_mmc.h
commit 5600efb1bc2745d93ae0bc08130117a84f2b9d69 upstream.
kunmap_atomic() takes as its argument the cookie returned by
kmap_atomic(), not the page address that was passed to kmap_atomic().
This patch fixes the compile error:
In file included from drivers/mmc/host/tmio_mmc.c:37:
drivers/mmc/host/tmio_mmc.h: In function 'tmio_mmc_kunmap_atomic':
drivers/mmc/host/tmio_mmc.h:192: error: negative width in bit-field '<anonymous>'
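A sketch of correct usage under that era's API (the helper name and
the KM_BIO_SRC_IRQ slot are shown for illustration):

    static void copy_into_page(struct page *page, const void *src,
                               size_t len)
    {
            void *addr = kmap_atomic(page, KM_BIO_SRC_IRQ);

            memcpy(addr, src, len);
            kunmap_atomic(addr, KM_BIO_SRC_IRQ);    /* cookie, not page */
    }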
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Acked-by: Eric Miao <eric.y.miao@gmail.com>
Tested-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Yusuke Goda [Thu, 9 Sep 2010 23:37:39 +0000 (16:37 -0700)]
tmio_mmc: don't clear unhandled pending interrupts
commit b78d6c5f51935ba89df8db33a57bacb547aa7325 upstream.
Previously, it was possible for ack_mmc_irqs() to clear pending interrupt
bits in the CTL_STATUS register, even though the interrupt handler had not
been called. This was because of a race that existed when doing a
read-modify-write sequence on CTL_STATUS. After the read step in this
sequence, if an interrupt occurred (causing one of the bits in CTL_STATUS
to be set) the write step would inadvertently clear it.
Observed with the TMIO_STAT_RXRDY bit together with CMD53 on AR6002 and
BCM4318 SDIO cards in polled mode.
This patch eliminates the race by writing to CTL_STATUS directly, clearing
only the interrupts that were passed as an argument to ack_mmc_irqs().
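A sketch of the race-free acknowledge (names follow tmio_mmc.h;
illustrative):

  static void ack_mmc_irqs(struct tmio_mmc_host *host, u32 i)
  {
          /* The old code did sd_ctrl_read32()/mask/sd_ctrl_write32();
           * an IRQ raised between the read and the write was wiped out.
           * Writing the inverted mask directly clears exactly the bits
           * the caller asked for and nothing else. */
          sd_ctrl_write32(host, CTL_STATUS, ~i);
  }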
[matt@console-pimps.org: rewrote changelog]
Signed-off-by: Yusuke Goda <yusuke.goda.sx@renesas.com>
Acked-by: Magnus Damm <damm@opensource.se>
Tested-by: Arnd Hannemann <arnd@arndnet.de>
Acked-by: Ian Molton <ian@mnementh.co.uk>
Cc: Matt Fleming <matt@console-pimps.org>
Cc: Samuel Ortiz <sameo@linux.intel.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: <linux-mmc@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Oberparleiter [Thu, 9 Sep 2010 23:37:35 +0000 (16:37 -0700)]
gcov: fix null-pointer dereference for certain module types
commit
85a0fdfd0f967507f3903e8419bc7e408f5a59de upstream.
The gcov-kernel infrastructure expects that each object file is loaded
only once. This may not be true, e.g. when loading multiple kernel
modules which are linked to the same object file. As a result, loading
such kernel modules will result in incorrect gcov results while unloading
will cause a null-pointer dereference.
This patch fixes these problems by changing the gcov-kernel infrastructure
so that multiple profiling data sets can be associated with one debugfs
entry. It applies to 2.6.36-rc1.
Signed-off-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Reported-by: Werner Spies <werner.spies@thalesgroup.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dan Carpenter [Sat, 4 Sep 2010 03:14:35 +0000 (03:14 +0000)]
irda: off by one
commit
cf9b94f88bdbe8a02015fc30d7c232b2d262d4ad upstream.
This is an off-by-one error. We would go past the end when we NUL-terminate
the "value" string at the end of the function. The "value" buffer is
allocated in irlan_client_parse_response() or
irlan_provider_parse_command().
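The boundary, sketched (the parsers place the parameter into a fixed
1016-byte "value" buffer and NUL-terminate it afterwards; treat the fragment
and the error code as illustrative of that context):

  if (val_len >= 1016)    /* the old check used '>', off by one */
          return -RSP_INVALID_COMMAND_FORMAT;
  memcpy(value, buf + n, val_len);
  value[val_len] = '\0';  /* needs val_len <= 1015 to stay in bounds */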
CC: stable@kernel.org
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Chris Wright [Thu, 9 Sep 2010 23:34:59 +0000 (16:34 -0700)]
tracing: t_start: reset FTRACE_ITER_HASH in case of seek/pread
commit
df09162550fbb53354f0c88e85b5d0e6129ee9cc upstream.
Be sure to avoid entering t_show() with FTRACE_ITER_HASH set without
having properly started the iterator to iterate the hash. This case is
degenerate and, as discovered by Robert Swiecki, can cause t_hash_show()
to misuse a pointer. This causes a NULL ptr deref with possible security
implications. Tracked as CVE-2010-3079.
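The shape of the fix (illustrative, following kernel/trace/ftrace.c naming):
t_start() drops the hash flag whenever the iterator rewinds, so a seek or
pread can never leave it claiming an unstarted hash walk.

  static void *t_start(struct seq_file *m, loff_t *pos)
  {
          struct ftrace_iterator *iter = m->private;

          mutex_lock(&ftrace_lock);

          /* a seek may rewind us out of the hash walk; reset the state
           * so t_show() never follows a stale hash pointer */
          iter->flags &= ~FTRACE_ITER_HASH;

          /* ... normal iterator start-up continues here ... */
  }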
Cc: Robert Swiecki <swiecki@google.com>
Cc: Eugene Teo <eugene@redhat.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Steven Rostedt [Wed, 8 Sep 2010 15:20:37 +0000 (11:20 -0400)]
tracing: Do not allow llseek to set_ftrace_filter
commit
9c55cb12c1c172e2d51e85fbb5a4796ca86b77e7 upstream.
Reading the file set_ftrace_filter does three things.
1) shows whether or not filters are set for the function tracer
2) shows what functions are set for the function tracer
3) shows what triggers are set on any functions
Item 3 is independent of items 1 and 2.
The way this file currently works is that it is a state machine,
and as you read it, it may change state. But this assumption breaks
when you use lseek() on the file. The state machine gets out of sync
and the t_show() may use the wrong pointer and cause a kernel oops.
Luckily, this will only kill the app that does the lseek, but the app
dies while holding a mutex. This prevents anyone else from using the
set_ftrace_filter file (or any other function tracing file for that matter).
A real fix would be to rewrite the code, but that is too much for an -rc
release or stable. This patch simply disables llseek on the
set_ftrace_filter file for now; the proper fix can go into the
next major release.
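The stop-gap is a one-line change in the file_operations; kernels of this
era provide no_llseek for exactly this purpose (sketch):

  static const struct file_operations ftrace_filter_fops = {
          .open    = ftrace_filter_open,
          .read    = seq_read,
          .write   = ftrace_filter_write,
          .release = ftrace_filter_release,
          .llseek  = no_llseek,   /* refuse lseek() instead of corrupting state */
  };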
Reported-by: Robert Swiecki <swiecki@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: Tavis Ormandy <taviso@google.com>
Cc: Eugene Teo <eugene@redhat.com>
Cc: vendor-sec@lst.de
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Li Zefan [Mon, 23 Aug 2010 08:50:12 +0000 (16:50 +0800)]
tracing: Fix a race in function profile
commit
3aaba20f26f58843e8f20611e5c0b1c06954310f upstream.
If someone disables function_profile while we are reading
trace_stat/functionX, we can trigger this:
divide error: 0000 [#1] PREEMPT SMP
...
EIP is at function_stat_show+0x90/0x230
...
This fix just takes the ftrace_profile_lock and checks if
rec->counter is 0. If it's 0, we know the profile buffer
has been reset.
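A sketch of the guard in function_stat_show() (illustrative; the printing
details are elided):

  static int function_stat_show(struct seq_file *m, void *v)
  {
          struct ftrace_profile *rec = v;
          unsigned long long avg;
          int ret = 0;

          mutex_lock(&ftrace_profile_lock);

          /* we raced with a profile reset */
          if (unlikely(rec->counter == 0)) {
                  ret = -EBUSY;
                  goto out;
          }

          avg = rec->time;
          do_div(avg, rec->counter);      /* safe: counter != 0 */
          /* ... seq_printf() the symbol, hit count and average ... */
  out:
          mutex_unlock(&ftrace_profile_lock);
          return ret;
  }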
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
LKML-Reference: <4C723644.4040708@cn.fujitsu.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Tejun Heo [Tue, 7 Sep 2010 12:05:31 +0000 (14:05 +0200)]
libata: skip EH autopsy and recovery during suspend
commit
e2f3d75fc0e4a0d03c61872bad39ffa2e74a04ff upstream.
For some mysterious reason, certain hardware reacts badly to usual EH
actions while the system is going for suspend. As the devices won't
be needed until the system is resumed, ask EH to skip usual autopsy
and recovery and proceed directly to suspend.
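The mechanism, sketched (ATA_EHI_NO_AUTOPSY already existed; the commit adds
a companion ATA_EHI_NO_RECOVERY flag; placement illustrative):

  /* when queueing the PM request for suspend: */
  ehi->flags |= ATA_EHI_NO_AUTOPSY | ATA_EHI_NO_RECOVERY;

  /* and in the EH core, before attempting recovery: */
  if (ehc->i.flags & ATA_EHI_NO_RECOVERY)
          return;         /* go straight to the suspend action */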
Signed-off-by: Tejun Heo <tj@kernel.org>
Tested-by: Stephan Diestelhorst <stephan.diestelhorst@amd.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Alan Stern [Fri, 7 May 2010 14:41:10 +0000 (10:41 -0400)]
HID: fix suspend crash by moving initializations earlier
commit
fde4e2f73208b8f34f123791e39c0cb6bc74b32a upstream.
Although the usbhid driver allocates its usbhid structure in the probe
routine, several critical fields in that structure don't get
initialized until usbhid_start(). However, if report-descriptor
parsing fails, usbhid_start() is never called. This leads to
problems during system suspend: the system will freeze.
This patch (as1378) fixes the bug by moving the initialization
statements up into usbhid_probe().
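The shape of the change (illustrative): the fields below are exactly the
kind of state that must exist even when usbhid_start() is never reached.

  static int usbhid_probe(struct usb_interface *intf,
                          const struct usb_device_id *id)
  {
          /* ... allocate hid and usbhid ... */

          /* formerly done in usbhid_start(), too late for a device
           * whose report descriptor fails to parse: */
          init_waitqueue_head(&usbhid->wait);
          INIT_WORK(&usbhid->reset_work, hid_reset);
          setup_timer(&usbhid->io_retry, hid_retry_timeout, (unsigned long)hid);
          spin_lock_init(&usbhid->lock);

          /* ... parse the report descriptor, register the device ... */
  }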
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Bruno Prémont <bonbons@linux-vserver.org>
Tested-By: Bruno Prémont <bonbons@linux-vserver.org>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Cc: maximilian attems <max@stro.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Jiri Kosina [Wed, 17 Feb 2010 13:25:01 +0000 (14:25 +0100)]
HID: usbhid: initialize interface pointers early enough
commit
57ab12e418ec4fe24c11788bb1bbdabb29d05679 upstream.
Move the initialization of USB interface pointers from _start()
over to _probe() callback, which is where it belongs.
This fixes the case where the interface is NULL when parsing of the
report descriptor fails.
LKML-Reference: <20100213135720.603e5f64@neptune.home>
Reported-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Bruno Prémont <bonbons@linux-vserver.org>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Cc: maximilian attems <max@stro.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Robert Richter [Wed, 1 Sep 2010 12:50:50 +0000 (14:50 +0200)]
oprofile, x86: fix init_sysfs() function stub
commit
269f45c25028c75fe10e6d9be86e7202ab461fbc upstream.
Using the return value of init_sysfs(), introduced with commit
10f0412 ("oprofile, x86: fix init_sysfs error handling"), uncovered
the following build error for !CONFIG_PM:
.../linux/arch/x86/oprofile/nmi_int.c: In function ‘op_nmi_init’:
.../linux/arch/x86/oprofile/nmi_int.c:784: error: expected expression before ‘do’
make[2]: *** [arch/x86/oprofile/nmi_int.o] Error 1
make[1]: *** [arch/x86/oprofile] Error 2
This patch fixes this.
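The !CONFIG_PM stubs become inline functions so their return value can be
used in an expression (sketch matching the described fix):

  #ifdef CONFIG_PM
  static int init_sysfs(void);
  static void exit_sysfs(void);
  #else
  /* a "do { } while (0)" macro cannot stand where a value is expected */
  static inline int init_sysfs(void) { return 0; }
  static inline void exit_sysfs(void) { }
  #endif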
Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Robert Richter [Mon, 30 Aug 2010 08:56:18 +0000 (10:56 +0200)]
oprofile, x86: fix init_sysfs error handling
commit
10f0412f57f2a76a90eff4376f59cbb0a39e4e18 upstream.
On failure, init_sysfs() might not properly free resources; the error
code of the function is not checked; and when reinitializing, the exit
function might be called twice. This patch fixes all of this.
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Robert Richter [Fri, 13 Aug 2010 14:29:04 +0000 (16:29 +0200)]
oprofile: fix crash when accessing freed task structs
commit
750d857c682f4db60d14722d430c7ccc35070962 upstream.
This patch fixes a crash during shutdown reported below. The crash is
caused by accessing already freed task structs. The fix changes the
order for registering and unregistering notifier callbacks.
All notifiers must be initialized before buffers start working. To
stop buffer synchronization we cancel all workqueues, unregister the
notifier callback and then flush all buffers. After all of this we
can finally free all listed tasks.
This should avoid accessing freed tasks.
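The teardown order described above, as a sketch (function names are
illustrative; the comments carry the actual ordering):

  static void sync_stop(void)
  {
          /* 1. cancel the per-CPU workqueues so no new sync runs start */
          cancel_all_buffer_work();
          /* 2. unregister the task/mm notifier callbacks */
          unregister_notifiers();
          /* 3. drain whatever the buffers still hold */
          flush_all_buffers();
          /* 4. only now is it safe to free the listed dead tasks */
          free_all_tasks();
  }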
On 22.07.10 01:14:40, Benjamin Herrenschmidt wrote:
> So the initial observation is a spinlock bad magic followed by a crash
> in the spinlock debug code:
>
> [ 1541.586531] BUG: spinlock bad magic on CPU#5, events/5/136
> [ 1541.597564] Unable to handle kernel paging request for data at address 0x6b6b6b6b6b6b6d03
>
> Backtrace looks like:
>
> spin_bug+0x74/0xd4
> ._raw_spin_lock+0x48/0x184
> ._spin_lock+0x10/0x24
> .get_task_mm+0x28/0x8c
> .sync_buffer+0x1b4/0x598
> .wq_sync_buffer+0xa0/0xdc
> .worker_thread+0x1d8/0x2a8
> .kthread+0xa8/0xb4
> .kernel_thread+0x54/0x70
>
> So we are accessing a freed task struct in the work queue when
> processing the samples.
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Ben Hutchings [Thu, 9 Sep 2010 04:21:16 +0000 (05:21 +0100)]
tun: Don't add sysfs attributes to devices without sysfs directories
This applies to 2.6.32 *only*. It has not been applied upstream since
the limitation no longer exists.
Prior to Linux 2.6.35, net devices outside the initial net namespace
did not have sysfs directories. Attempting to add attributes to
them will trigger a BUG().
Reported-and-tested-by: Russell Stuart <russell-debian@stuart.id.au>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dan Carpenter [Wed, 25 Aug 2010 07:12:29 +0000 (09:12 +0200)]
sysfs: checking for NULL instead of ERR_PTR
commit
57f9bdac2510cd7fda58e4a111d250861eb1ebeb upstream.
d_path() returns an ERR_PTR on failure; it never returns NULL.
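The corrected check, sketched (buffer handling illustrative):

  char *p = d_path(&file->f_path, buf, sizeof(buf));

  /* d_path() reports failure as ERR_PTR(-errno), never as NULL */
  if (IS_ERR(p))
          return PTR_ERR(p);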
Signed-off-by: Dan Carpenter <error27@gmail.com>
Reviewed-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Takashi Iwai [Mon, 6 Sep 2010 07:13:45 +0000 (09:13 +0200)]
ALSA: seq/oss - Fix double-free at error path of snd_seq_oss_open()
commit
27f7ad53829f79e799a253285318bff79ece15bd upstream.
The error handling in snd_seq_oss_open() has several bad code paths
that dereference released pointers and double-free kmalloc'ed data.
The object dp is released in free_devinfo(), which is called via the
private_free callback. The rest of the code shouldn't touch this
object any more. The patch changes delete_port() to call kfree() in
any case, and gets rid of unnecessary destructor calls in
snd_seq_oss_open().
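The shape of the fix (illustrative): delete_port() owns the final kfree(),
so the open path no longer needs to run destructors itself.

  static int delete_port(struct seq_oss_devinfo *dp)
  {
          if (dp->port < 0) {
                  kfree(dp);      /* no port was created: free right here */
                  return 0;
          }
          /* otherwise detaching the port triggers the private_free
           * callback (free_devinfo), which releases dp exactly once */
          return snd_seq_event_port_detach(dp->cseq, dp->port);
  }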
Fixes CVE-2010-3080.
Reported-and-tested-by: Tavis Ormandy <taviso@cmpxchg8b.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Kailang Yang [Fri, 9 Apr 2010 08:57:33 +0000 (10:57 +0200)]
ALSA: hda - Fix auto-parser of ALC269vb for HP pin NID 0x21
commit
531d8791accf1464bc6854ff69d08dd866189d17 upstream.
ALC269vb additionally has an alternative HP pin at NID 0x21.
Fix the parser to recognize it.
Signed-off-by: Kailang Yang <kailang@realtek.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Cc: David Henningsson <david.henningsson@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Toby Gray [Thu, 2 Sep 2010 09:46:20 +0000 (10:46 +0100)]
USB: cdc-acm: Fixing crash when ACM probing interfaces with no endpoint descriptors.
commit
577045c0a76e34294f902a7d5d60e90b04d094d0 upstream.
Certain USB devices, such as the Nokia X6 mobile phone, don't expose any
endpoint descriptors on some of their interfaces. If the ACM driver is forced
to probe all interfaces on a device, a NULL pointer dereference will occur
when it attempts to use the endpoints of the alternate settings.
One way to get the ACM driver to probe all the interfaces is by using the
/sys/bus/usb/drivers/cdc_acm/new_id interface.
This patch checks that the endpoint pointer for the current alternate settings
is non-NULL before using it.
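The guard, sketched (names and placement illustrative; the point is to test
the descriptor pointer before the first dereference):

  /* some interfaces carry no endpoint descriptors at all */
  if (!data_interface->cur_altsetting->endpoint)
          return -EINVAL;

  epread  = &data_interface->cur_altsetting->endpoint[0].desc;
  epwrite = &data_interface->cur_altsetting->endpoint[1].desc;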
Signed-off-by: Toby Gray <toby.gray@realvnc.com>
Cc: Oliver Neukum <oliver@neukum.name>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Philippe Corbes [Tue, 31 Aug 2010 17:31:32 +0000 (19:31 +0200)]
USB: cdc-acm: Add pseudo modem without AT command capabilities
commit
5b239f0aebd4dd6f85b13decf5e18e86e35d57f0 upstream.
cdc-acm.c: manage pseudo-modems without AT-command capabilities,
enabling simple electronic gadgets based on microcontrollers to be driven.
The Interface descriptor is like this:
bInterfaceClass 2 Communications
bInterfaceSubClass 2 Abstract (modem)
bInterfaceProtocol 0 None
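Such interfaces can be matched class-wide in the ACM id table; a sketch of
the entry (USB_CDC_PROTO_NONE corresponds to the bInterfaceProtocol 0
above):

  static const struct usb_device_id acm_ids[] = {
          /* ... */
          /* control interfaces without AT-command capabilities */
          { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
                  USB_CDC_PROTO_NONE) },
          /* ... */
  };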
Signed-off-by: Philippe Corbes <philippe.corbes@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Toby Gray [Wed, 1 Sep 2010 15:01:19 +0000 (16:01 +0100)]
USB: cdc-acm: Adding second ACM channel support for various Nokia and one Samsung phones
commit
4035e45632c2a8bb4edae83c20447051bd9a9604 upstream.
S60 phones from Nokia and Samsung expose two ACM channels. The first is a modem
with a standard AT-command interface, which is picked up correctly by CDC-ACM.
The second ACM port is marked as having a vendor-specific protocol. This means
that the ACM driver will not claim the second channel by default.
This adds support for the second ACM channel for the following devices:
Nokia E63
Nokia E75
Nokia 6760 Slide
Nokia E52
Nokia E55
Nokia E72
Nokia X6
Nokia N97 Mini
Nokia 5800 Xpressmusic
Nokia E90
Samsung GTi8510 (INNOV8)
Signed-off-by: Toby Gray <toby.gray@realvnc.com>
Cc: Oliver Neukum <oliver@neukum.name>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Przemo Firszt [Mon, 28 Jun 2010 20:29:34 +0000 (21:29 +0100)]
USB: Expose vendor-specific ACM channel on Nokia 5230
commit
83a4eae9aeed4a69e89e323a105e653ae06e7c1f upstream.
Nokia S60 phones expose two ACM channels. The first is
a modem, the second is 'vendor-specific' but is treated
as a serial device at the S60 end, so we want to expose
it on Linux too.
Signed-off-by: Przemo Firszt <przemo@firszt.eu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Russ Nelson [Thu, 22 Apr 2010 03:07:03 +0000 (23:07 -0400)]
USB: cdc-acm: add another device quirk
commit
c3baa19b0a9b711b02cec81d9fea33b7b9628957 upstream.
The Maretron USB100 needs this quirk in order to work properly.
Signed-off-by: Russ Nelson <nelson@crynwr.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Adrian Taylor [Thu, 19 Nov 2009 10:35:33 +0000 (10:35 +0000)]
USB: Exposing second ACM channel as tty for Nokia S60 phones.
commit
c1479a92cf0a7792298d364e44a781550621cb58 upstream.
Nokia S60 phones expose two ACM channels. The first is a modem and is picked
up by the standard AT-command interface information in the CDC-ACM driver. The
second is marked as having a vendor-specific protocol. Normally, we don't
expose those as ttys. (On some other devices, they may be claimed by the
rndis_host driver and used as a network interface).
But on S60 this second ACM channel is the way that third-party S60 application
developers are expected to communicate over USB. It acts as a serial device
at the S60 end, and so it should on Linux too.
The list of devices is largely derived from:
http://wiki.forum.nokia.com/index.php/S60_Platform_and_device_identification_codes
http://wiki.forum.nokia.com/index.php/Nokia_USB_Product_IDs
and includes only the S60 3rd Edition+ devices documented there.
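Because the channel's protocol is vendor-specific, each phone is matched per
device id; a sketch of the helper and one table entry (the product id shown
is illustrative; 0x0421 is Nokia's vendor id):

  #define NOKIA_PCSUITE_ACM_INFO(x) \
          USB_DEVICE_AND_INTERFACE_INFO(0x0421, x, \
                  USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM, \
                  USB_CDC_ACM_PROTO_VENDOR)

  static const struct usb_device_id acm_ids[] = {
          /* ... */
          { NOKIA_PCSUITE_ACM_INFO(0x04ce), },    /* Nokia E90 */
          /* ... */
  };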
There are many devices for which the USB device ID is not documented,
including:
Nokia 6290
Nokia E63
Nokia 5630 XpressMusic
Nokia 5730 XpressMusic
Nokia 6710 Navigator
Nokia 6720 classic
Nokia 6730 Classic
Nokia 6760 slide
Nokia 6790 slide
Nokia 6790 Surge
Nokia E52
Nokia E55
Nokia E71x (AT&T)
Nokia E72
Nokia E75
Nokia E75 US+LTA variant
Nokia N79
Nokia N86 8MP
Nokia 5230 (RM-588)
Nokia 5230 (RM-594)
Nokia 5530 XpressMusic
Nokia 5530 XpressMusic (china)
Nokia 5800 XM
Nokia N97 (RM-506)
Nokia N97 mini
Nokia X6
It would be good to add those subsequently.
Signed-off-by: Adrian Taylor <aat@realvnc.com>
Acked-by: Oliver Neukum <oliver@neukum.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dave Ludlow [Wed, 1 Sep 2010 16:33:30 +0000 (12:33 -0400)]
usb: serial: mos7840: Add USB IDs to support more B&B USB/RS485 converters.
commit
870408c8291015872a7a0b583673a9e56b3e73f4 upstream.
Add the USB IDs needed to support the B&B USOPTL4-4P, USO9ML2-2P, and
USO9ML2-4P. This patch expands on, and corrects a typo in, the patch
sent on 08-31-2010.
Signed-off-by: Dave Ludlow <dave.ludlow@bay.ws>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Johan Hovold [Mon, 28 Dec 2009 22:01:55 +0000 (23:01 +0100)]
USB: mos7840: fix DMA buffers on stack and endianness bugs
commit
9e221a35f82cbef0397d81fed588bafba95b550c upstream.
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dave Ludlow [Tue, 31 Aug 2010 18:26:17 +0000 (14:26 -0400)]
usb: serial: mos7840: Add USB ID to support the B&B Electronics USOPTL4-2P.
commit
caf3a636a9f809fdca5fa746e6687096457accb1 upstream.
Add the USB ID needed to support B&B Electronic's 2-port, optically-isolated,
powered, USB to RS485 converter.
Signed-off-by: Dave Ludlow <dave.ludlow@bay.ws>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Blaise Gassend [Fri, 18 Dec 2009 23:23:38 +0000 (15:23 -0800)]
USB: serial: Extra device/vendor ID for mos7840 driver
commit
27f1281d5f72e4f161e215ccad3d7d86b9e624a9 upstream.
Signed-off-by: Blaise Gassend <blaise.gasend_linux@m4x.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Luke Lowrey [Thu, 2 Sep 2010 10:39:49 +0000 (11:39 +0100)]
USB: ftdi_sio: Added custom PIDs for ChamSys products
commit
657373883417b2618023fd4135d251ba06a2c30a upstream.
Added the 0xDAF8 to 0xDAFF PID range for ChamSys Limited USB interface/wing products
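Each PID in the range becomes one entry in the ftdi_sio id table; a sketch
with illustrative macro names:

  /* in ftdi_sio_ids.h (name illustrative) */
  #define FTDI_CHAMSYS_MAXI_WING_PID      0xDAF8

  /* in ftdi_sio.c's id_table */
  { USB_DEVICE(FTDI_VID, FTDI_CHAMSYS_MAXI_WING_PID) },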
Signed-off-by: Luke Lowrey <luke@chamsys.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Jason Detring [Thu, 26 Aug 2010 20:08:54 +0000 (15:08 -0500)]
USB: cp210x: Add B&G H3000 link cable ID
commit
0bf7a81c5d447c21db434be35363c44c0a30f598 upstream.
This is the cable between an H3000 navigation unit and a multi-function display.
http://www.bandg.com/en/Products/H3000/Spares-and-Accessories/Cables/H3000-CPU-USB-Cable-Pack/
Signed-off-by: Jason Detring <jason.detring@navico.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Craig Shelley [Mon, 23 Aug 2010 19:50:57 +0000 (20:50 +0100)]
USB: CP210x Add new device ID
commit
541e05ec3add5ab5bcf238d60161b53480280b20 upstream.
New device ID added for Balluff RFID reader.
Signed-off-by: Craig Shelley <craig@microtron.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Maxim Osipov [Sat, 21 Aug 2010 10:54:06 +0000 (14:54 +0400)]
USB: Fix kernel oops with g_ether and Windows
commit
037d3656adbd7e8cb848f01cf5dec423ed76bbe7 upstream.
This fixes the problem reported at
https://bugzilla.kernel.org/show_bug.cgi?id=16023.
Signed-off-by: Maxim Osipov <maxim.osipov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dan Carpenter [Sat, 14 Aug 2010 09:06:19 +0000 (11:06 +0200)]
USB: ehci-ppc-of: problems in unwind
commit
08a3b3b1c2e622e378d9086aee9e2e42ce37591d upstream.
The iounmap(ehci->ohci_hcctrl_reg); should be the first thing we do
because the ioremap() was the last thing we did. Also if we hit any of
the goto statements in the original code then it would have led to a
NULL dereference of "ehci". This bug was introduced in:
796bcae7361c
"USB: powerpc: Workaround for the PPC440EPX USBH_23 errata [take 3]"
I modified the few lines in front a little so that my code didn't
obscure the successful-return code path.
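The rule the fix applies, sketched: release resources in the reverse order
of their acquisition, and keep ehci untouched until the hcd has actually
produced it (labels illustrative):

  rv = usb_add_hcd(hcd, irq, 0);
  if (rv)
          goto err_ehci;
  return 0;

  err_ehci:
          if (ehci->ohci_hcctrl_reg)
                  iounmap(ehci->ohci_hcctrl_reg); /* mapped last, unmapped first */
          iounmap(hcd->regs);
  err_ioremap:
          irq_dispose_mapping(irq);
  err_irq:
          release_mem_region(hcd->rsrc_start, hcd->rsrc_len);
  err_rmr:
          usb_put_hcd(hcd);
          return rv;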
Signed-off-by: Dan Carpenter <error27@gmail.com>
Reviewed-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Sunil Mushran [Thu, 12 Aug 2010 23:24:26 +0000 (16:24 -0700)]
ocfs2: Fix incorrect checksum validation error
commit
f5ce5a08a40f2086435858ddc80cb40394b082eb upstream.
For local mounts, ocfs2_read_locked_inode() calls ocfs2_read_blocks_sync() to
read the inode off the disk. The latter first checks to see if that block is
cached in the journal, and, if so, returns that block. That is ok.
But ocfs2_read_locked_inode() goes wrong when it tries to validate the checksum
of such blocks. Blocks that are cached in the journal may not have had their
checksum computed as yet. We should not validate the checksums of such blocks.
Fixes ossbz#1282
http://oss.oracle.com/bugzilla/show_bug.cgi?id=1282
Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
Signed-off-by: Tao Ma <tao.ma@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Luis R. Rodriguez [Mon, 30 Aug 2010 23:26:33 +0000 (19:26 -0400)]
ath9k_hw: fix parsing of HT40 5 GHz CTLs
commit
904879748d7439a6dabdc6be9aad983e216b027d upstream.
The 5 GHz CTL indexes were not being read for all hardware
devices because the CTL_MODE_M mask used to extract them was
one bit too short. Without this fix the calibrated regulatory
maximum values were not being picked up when devices operate
on 5 GHz in HT40 mode. The final output power used for Atheros
devices is the minimum between the calibrated CTL values and
what CRDA provides.
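The one-bit fix, sketched (assuming the mask define widens from 7 to 0xf;
the 5 GHz HT40 CTL encoding is the value that no longer fits in three bits):

  /* before: #define CTL_MODE_M 7, three bits; truncates the HT40 mode */
  #define CTL_MODE_M      0xf     /* four bits: 5 GHz HT40 CTLs now parse */

  mode = ctl & CTL_MODE_M;        /* extract the CTL mode bits */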
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Miklos Szeredi [Tue, 7 Sep 2010 11:42:41 +0000 (13:42 +0200)]
fuse: flush background queue on connection close
commit
595afaf9e6ee1b48e13ec4b8bcc8c7dee888161a upstream.
David Bartley reported that fuse can hang in fuse_get_req_nofail() when
the connection to the filesystem server is no longer active.
If bg_queue is not empty then flush_bg_queue() called from
request_end() can put more requests on to the pending queue. If this
happens while ending requests on the processing queue then those
background requests will be queued to the pending list and never
ended.
Another problem is that fuse_dev_release() didn't wake up processes
sleeping on blocked_waitq.
Solve this by:
a) flushing the background queue before calling end_requests() on the
pending and processing queues
b) setting blocked = 0 and waking up processes waiting on
blocked_waitq
Thanks to David for an excellent bug report.
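The two parts of the fix, sketched against fs/fuse/dev.c naming
(illustrative):

  /* a) drain the background queue before ending the pending and
   *    processing queues, so request_end() cannot refill pending */
  static void end_queued_requests(struct fuse_conn *fc)
  {
          fc->max_background = UINT_MAX;  /* let everything through */
          flush_bg_queue(fc);
          end_requests(fc, &fc->pending);
          end_requests(fc, &fc->processing);
  }

  /* b) on release, unblock anyone sleeping for a request slot */
  fc->blocked = 0;
  wake_up_all(&fc->blocked_waitq);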
Reported-by: David Bartley <andareed@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Hank Janssen [Wed, 1 Sep 2010 18:10:41 +0000 (11:10 -0700)]
staging: hv: Fixed lockup problem with bounce_buffer scatter list
commit
77c5ceaff31645ea049c6706b99e699eae81fb88 upstream.
Fixed a lockup problem with the bounce_buffer scatter list which caused
crashes under heavy load, along with a minor code-indentation cleanup in
the affected area.
Removed whitespace and noted the minor indentation changes in the
description, as pointed out by Joe Perches. (Thanks for reviewing, Joe.)
Signed-off-by: Hank Janssen <hjanssen@microsoft.com>
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Hank Janssen [Thu, 5 Aug 2010 19:30:31 +0000 (19:30 +0000)]
staging: hv: Increased storvsc ringbuffer and max_io_requests
commit
15dd1c9f53b31cdc84b8072a88c23fa09527c596 upstream.
Increased the storvsc ringbuffer and max_io_requests. This now more
closely mimics the numbers on Hyper-V and will allow more IO requests
to take place for the SCSI driver.
Max_IO is set to double what it was before; Hyper-V allows it, and we
have had appliance-builder requests to see if it was a problem to
increase the number.
The ringbuffer size for storvsc is now increased because I have seen a
few buffer problems on extremely busy systems; it was set pretty low
before, and since max_io_requests is increased I really needed to
increase the buffer as well.
Signed-off-by: Hank Janssen <hjanssen@microsoft.com>
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Haiyang Zhang [Thu, 5 Aug 2010 19:30:01 +0000 (19:30 +0000)]
staging: hv: Fixed the value of the 64bit-hole inside ring buffer
commit
e5fa721d1c2a54261a37eb59686e18dee34b6af6 upstream.
Fixed the value of the 64bit-hole inside the ring buffer; this
caused a problem on Hyper-V when running checked Windows builds.
Checked builds of Windows are used internally and given to external
system integrators at times. They are builds that verify, for example,
that all elements in a structure follow the definition of that
structure. The bug fixed here was in a field that we did not fill in
at all (because we do not use it on the Linux side), and the checked
build of Windows logged errors about it internally.
This fixes that error.
Signed-off-by: Hank Janssen <hjanssen@microsoft.com>
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Hank Janssen [Thu, 5 Aug 2010 19:29:44 +0000 (19:29 +0000)]
staging: hv: Fixed bounce kmap problem by using correct index
commit
0c47a70a9a8a6d1ec37a53d2f9cb82f8b8ef8aa2 upstream.
Fixed the bounce offset kmap problem by using the correct index.
The symptom of the problem is that on some NAS appliances it manifests
itself as an unresponsive VM under a load with many clients writing
small files.
Signed-off-by: Hank Janssen <hjanssen@microsoft.com>
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>