firefly-linux-kernel-4.4.55.git
11 years ago  vti: remove duplicated code to fix a memory leak
Cong Wang [Sat, 29 Jun 2013 05:00:57 +0000 (13:00 +0800)]
vti: remove duplicated code to fix a memory leak

[ Upstream commit ab6c7a0a43c2eaafa57583822b619b22637b49c7 ]

The vti module allocates dev->tstats twice: in vti_fb_tunnel_init()
and in vti_tunnel_init(). This leads to a memory leak of
dev->tstats.

Just remove the duplicated operations in vti_fb_tunnel_init().
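
For illustration, a minimal userspace analogue of the leak pattern (hypothetical
names, not the driver code): when two init paths both allocate the same stats
pointer, the second assignment drops the first block without freeing it.

  #include <stdlib.h>

  struct tunnel_stats { unsigned long rx_packets, tx_packets; };
  struct tunnel_dev { struct tunnel_stats *tstats; };

  static int tunnel_init(struct tunnel_dev *dev)
  {
          dev->tstats = calloc(1, sizeof(*dev->tstats));  /* first allocation */
          return dev->tstats ? 0 : -1;
  }

  static int fb_tunnel_init(struct tunnel_dev *dev)
  {
          /* Before the fix, this path allocated dev->tstats a second time,
           * overwriting (and thus leaking) the block from tunnel_init().
           * Keeping a single allocation site fixes the leak. */
          return tunnel_init(dev);
  }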

(candidate for -stable)

Signed-off-by: Cong Wang <amwang@redhat.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Saurabh Mohan <saurabh.mohan@vyatta.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  gre: fix a regression in ioctl
Cong Wang [Sat, 29 Jun 2013 04:02:59 +0000 (12:02 +0800)]
gre: fix a regression in ioctl

[ Upstream commit 6c734fb8592f6768170e48e7102cb2f0a1bb9759 ]

When testing GRE tunnel, I got:

 # ip tunnel show
 get tunnel gre0 failed: Invalid argument
 get tunnel gre1 failed: Invalid argument

This is a regression introduced by commit c54419321455631079c7d
("GRE: Refactor GRE tunneling code."): previously we only checked
the parameters for SIOCADDTUNNEL and SIOCCHGTUNNEL, but after that
commit the check is applied to all commands.

So, just check for SIOCADDTUNNEL and SIOCCHGTUNNEL.

After this patch I got:

 # ip tunnel show
 gre0: gre/ip  remote any  local any  ttl inherit  nopmtudisc
 gre1: gre/ip  remote 192.168.122.101  local 192.168.122.45  ttl inherit

Signed-off-by: Cong Wang <amwang@redhat.com>
Cc: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  net: Swap ver and type in pppoe_hdr
Changli Gao [Fri, 28 Jun 2013 16:15:51 +0000 (00:15 +0800)]
net: Swap ver and type in pppoe_hdr

[ Upstream commit b1a5a34bd0b8767ea689e68f8ea513e9710b671e ]

Ver and type in pppoe_hdr should be swapped, as defined by RFC 2516,
section 4.
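
For reference, a small stand-alone sketch of the wire layout RFC 2516 section 4
specifies, which the swapped bitfields are meant to match (this is not the
kernel's struct definition):

  #include <stdio.h>
  #include <stdint.h>

  /* RFC 2516 section 4: the first octet of the PPPoE header carries
   * VER in the most significant 4 bits and TYPE in the least significant 4. */
  static unsigned pppoe_ver(uint8_t first_octet)  { return first_octet >> 4; }
  static unsigned pppoe_type(uint8_t first_octet) { return first_octet & 0x0F; }

  int main(void)
  {
          uint8_t octet = 0x11;   /* VER = 1, TYPE = 1 for PPPoE */
          printf("ver=%u type=%u\n", pppoe_ver(octet), pppoe_type(octet));
          return 0;
  }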

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  x25: Fix broken locking in ioctl error paths.
Dave Jones [Fri, 28 Jun 2013 16:13:52 +0000 (12:13 -0400)]
x25: Fix broken locking in ioctl error paths.

[ Upstream commit 4ccb93ce7439b63c31bc7597bfffd13567fa483d ]

Two of the x25 ioctl cases have error paths that break out of the function without
unlocking the socket, leading to this warning:

================================================
[ BUG: lock held when returning to user space! ]
3.10.0-rc7+ #36 Not tainted
------------------------------------------------
trinity-child2/31407 is leaving the kernel with locks still held!
1 lock held by trinity-child2/31407:
 #0:  (sk_lock-AF_X25){+.+.+.}, at: [<ffffffffa024b6da>] x25_ioctl+0x8a/0x740 [x25]

Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  neighbour: fix a race in neigh_destroy()
Eric Dumazet [Fri, 28 Jun 2013 09:37:42 +0000 (02:37 -0700)]
neighbour: fix a race in neigh_destroy()

[ Upstream commit c9ab4d85de222f3390c67aedc9c18a50e767531e ]

There is a race in the neighbour code, because neigh_destroy() uses
skb_queue_purge(&neigh->arp_queue) without holding the neighbour lock,
while other parts of the code assume the neighbour rwlock is what
protects arp_queue.

Convert all skb_queue_purge() calls to the __skb_queue_purge() variant.

Use __skb_queue_head_init() instead of skb_queue_head_init()
to make clear we do not use arp_queue.lock.

And hold neigh->lock in neigh_destroy() to close the race.
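
Roughly, the destroy path ends up with the following shape (a sketch, not the
exact kernel diff): purge under neigh->lock and use the lock-free queue helper,
since arp_queue is protected by the neighbour lock rather than its own spinlock.

  write_lock_bh(&neigh->lock);
  __skb_queue_purge(&neigh->arp_queue);   /* caller holds the protecting lock */
  neigh->arp_queue_len_bytes = 0;
  write_unlock_bh(&neigh->lock);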

Reported-by: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  ipv6: only apply anti-spoofing checks to not-pointopoint tunnels
Hannes Frederic Sowa [Thu, 27 Jun 2013 20:46:04 +0000 (22:46 +0200)]
ipv6: only apply anti-spoofing checks to not-pointopoint tunnels

[ Upstream commit 5c29fb12e8fb8a8105ea048cb160fd79a85a52bb ]

Because of commit 218774dc341f219bfcf940304a081b121a0e8099 ("ipv6: add
anti-spoofing checks for 6to4 and 6rd") the sit driver dropped packets
for 2002::/16 destinations and sources even when configured to work as a
tunnel with fixed endpoint. We may only apply the 6rd/6to4 anti-spoofing
checks if the device is not in pointopoint mode.

This was an oversight from me in the above commit, sorry.  Thanks to
Roman Mamedov for reporting this!

Reported-by: Roman Mamedov <rm@romanrm.ru>
Cc: David Miller <davem@davemloft.net>
Cc: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  sparc32: vm_area_struct access for old Sun SPARCs.
Olivier DANET [Wed, 10 Jul 2013 20:56:10 +0000 (13:56 -0700)]
sparc32: vm_area_struct access for old Sun SPARCs.

upstream commit 961246b4ed8da3bcf4ee1eb9147f341013553e3c.

Commit e4c6bfd2d79d063017ab19a18915f0bc759f32d9 ("mm: rearrange
vm_area_struct for fewer cache misses") changed the layout of the
vm_area_struct structure, which broke several SPARC32 assembly routines
that used numerical constants for accessing the vm_mm field.

This patch defines the VMA_VM_MM constant to replace the immediate values.

Signed-off-by: Olivier DANET <odanet@caramail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  writeback: Fix periodic writeback after fs mount
Jan Kara [Fri, 28 Jun 2013 14:04:02 +0000 (16:04 +0200)]
writeback: Fix periodic writeback after fs mount

commit a5faeaf9109578e65e1a32e2a3e76c8b47e7dcb6 upstream.

Code in blkdev.c moves a device inode to default_backing_dev_info when
the last reference to the device is put and moves the device inode back
to its bdi when the first reference is acquired. This includes moving to
the wb.b_dirty list if the device inode is dirty. The code, however, doesn't
set up a timer to wake the corresponding flusher thread, and while the
wb.b_dirty list is non-empty __mark_inode_dirty() will not set it up either.
Thus periodic writeback is effectively disabled until a sync(2) call, which
can lead to unexpected data loss in case of crash or power failure.

Fix the problem by setting up a timer for periodic writeback in case we
add the first dirty inode to wb.b_dirty list in bdev_inode_switch_bdi().

Reported-by: Bert De Jonghe <Bert.DeJonghe@amplidata.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  Linux 3.10.3
Greg Kroah-Hartman [Thu, 25 Jul 2013 22:16:45 +0000 (15:16 -0700)]
Linux 3.10.3

11 years ago  tracing: Add trace_array_get/put() to event handling
Steven Rostedt (Red Hat) [Tue, 2 Jul 2013 19:30:53 +0000 (15:30 -0400)]
tracing: Add trace_array_get/put() to event handling

commit 8e2e2fa47129532a30cff6c25a47078dc97d9260 upstream.

Commit a695cb58162 "tracing: Prevent deleting instances when they are being read"
tried to fix a race between deleting a trace instance and reading contents
of a trace file. But it wasn't good enough. The following could crash the kernel:

 # cd /sys/kernel/debug/tracing/instances
 # ( while :; do mkdir foo; rmdir foo; done ) &
 # ( while :; do echo 1 > foo/events/sched/sched_switch 2> /dev/null; done ) &

Luckily this can only be done by root user, but it should be fixed regardless.

The problem is that a delete of the file can happen after the write to the event
is opened, but before the enabling happens.

The solution is to make sure the trace_array is available before succeeding in
opening for write, and to increment the ref counter while it is open.

Now the instance can be deleted when the events are writing to the buffer,
but the deletion of the instance will disable all events before the instance
is actually deleted.

Reported-by: Alexander Lam <azl@google.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  tracing: Fix race between deleting buffer and setting events
Steven Rostedt (Red Hat) [Tue, 2 Jul 2013 18:48:23 +0000 (14:48 -0400)]
tracing: Fix race between deleting buffer and setting events

commit 2a6c24afab70dbcfee49f4c76e1511eec1a3298b upstream.

While analyzing the code, I discovered that there's a potential race between
deleting a trace instance and setting events. There are a few races that can
occur if events are being traced as the buffer is being deleted. Mostly the
problem comes with freeing the descriptor used by the trace event callback.
To prevent problems like this, the events are disabled before the buffer is
deleted. The problem with the current solution is that the event_mutex is
released between disabling the events and freeing the files, which means that
the events could be enabled again while the freeing takes place.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  tracing: Get trace_array ref counts when accessing trace files
Steven Rostedt (Red Hat) [Tue, 2 Jul 2013 03:34:22 +0000 (23:34 -0400)]
tracing: Get trace_array ref counts when accessing trace files

commit 7b85af63034818e43aee6c1d7bf1c7c6796a9073 upstream.

When a trace file is opened that may access a trace array, it must
increment its ref count to prevent it from being deleted.
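
As a generic illustration of the get/put scheme (plain userspace C, not the
tracing code itself): opening takes a reference, closing drops it, and teardown
may only proceed once no opener holds a reference.

  #include <pthread.h>
  #include <stdbool.h>

  struct traced_obj {
          pthread_mutex_t lock;
          int ref;                /* open files currently using the object */
          bool deleted;           /* deletion requested; refuse new references */
  };

  static bool obj_get(struct traced_obj *o)
  {
          bool ok;
          pthread_mutex_lock(&o->lock);
          ok = !o->deleted;
          if (ok)
                  o->ref++;
          pthread_mutex_unlock(&o->lock);
          return ok;              /* open() fails (e.g. -ENODEV) when false */
  }

  static void obj_put(struct traced_obj *o)
  {
          pthread_mutex_lock(&o->lock);
          o->ref--;
          pthread_mutex_unlock(&o->lock);
  }

  static bool obj_try_delete(struct traced_obj *o)
  {
          bool idle;
          pthread_mutex_lock(&o->lock);
          o->deleted = true;
          idle = (o->ref == 0);
          pthread_mutex_unlock(&o->lock);
          return idle;            /* otherwise wait for the last obj_put() */
  }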

Reported-by: Alexander Lam <azl@google.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  tracing: Add trace_array_get/put() to handle instance refs better
Steven Rostedt (Red Hat) [Tue, 2 Jul 2013 02:50:29 +0000 (22:50 -0400)]
tracing: Add trace_array_get/put() to handle instance refs better

commit ff451961a8b2a17667a7bfa39c86fb9b351445db upstream.

Commit a695cb58162 "tracing: Prevent deleting instances when they are being read"
tried to fix a race between deleting a trace instance and reading contents
of a trace file. But it wasn't good enough. The following could crash the kernel:

 # cd /sys/kernel/debug/tracing/instances
 # ( while :; do mkdir foo; rmdir foo; done ) &
 # ( while :; do cat foo/trace &> /dev/null; done ) &

Luckily this can only be done by root user, but it should be fixed regardless.

The problem is that a delete of the file can happen after the reader starts
to open the file but before it grabs the trace_types_mutex.

The solution is to validate the trace array before using it. If the trace
array does not exist in the list of trace arrays, then it returns -ENODEV.

There's a possibility that a trace_array could be deleted and a new one
created and the open would open its file instead. But that is very minor as
it will just return the data of the new trace array, it may confuse the user
but it will not crash the system. As this can only be done by root anyway,
the race will only occur if root is deleting what it's trying to read at
the same time.

Reported-by: Alexander Lam <azl@google.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  tracing: Protect ftrace_trace_arrays list in trace_events.c
Alexander Z Lam [Tue, 2 Jul 2013 02:37:54 +0000 (19:37 -0700)]
tracing: Protect ftrace_trace_arrays list in trace_events.c

commit a82274151af2b075163e3c42c828529dee311487 upstream.

There are multiple places where the ftrace_trace_arrays list is accessed in
trace_events.c without the trace_types_lock held.

Link: http://lkml.kernel.org/r/1372732674-22726-1-git-send-email-azl@google.com
Signed-off-by: Alexander Z Lam <azl@google.com>
Cc: Vaibhav Nagarnaik <vnagarnaik@google.com>
Cc: David Sharp <dhsharp@google.com>
Cc: Alexander Z Lam <lambchop468@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  tracing: Make trace_marker use the correct per-instance buffer
Alexander Z Lam [Mon, 1 Jul 2013 22:31:24 +0000 (15:31 -0700)]
tracing: Make trace_marker use the correct per-instance buffer

commit 2d71619c59fac95a5415a326162fa046161b938c upstream.

The trace_marker file was present for each new instance created, but it
added the trace mark to the global trace buffer instead of to
the instance's buffer.

Link: http://lkml.kernel.org/r/1372717885-4543-2-git-send-email-azl@google.com
Signed-off-by: Alexander Z Lam <azl@google.com>
Cc: David Sharp <dhsharp@google.com>
Cc: Vaibhav Nagarnaik <vnagarnaik@google.com>
Cc: Alexander Z Lam <lambchop468@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  tracing: Fix irqs-off tag display in syscall tracing
zhangwei(Jovi) [Wed, 10 Apr 2013 03:26:23 +0000 (11:26 +0800)]
tracing: Fix irqs-off tag display in syscall tracing

commit 11034ae9c20f4057a6127fc965906417978e69b2 upstream.

All syscall tracing irqs-off tags are wrong; the syscall enter entry doesn't
disable irqs.

 [root@jovi tracing]#echo "syscalls:sys_enter_open" > set_event
 [root@jovi tracing]# cat trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 13/13   #P:2
 #
 #                              _-----=> irqs-off
 #                             / _----=> need-resched
 #                            | / _---=> hardirq/softirq
 #                            || / _--=> preempt-depth
 #                            ||| /     delay
 #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
 #              | |       |   ||||       |         |
       irqbalance-513   [000] d... 56115.496766: sys_open(filename: 804e1a6, flags: 0, mode: 1b6)
       irqbalance-513   [000] d... 56115.497008: sys_open(filename: 804e1bb, flags: 0, mode: 1b6)
         sendmail-771   [000] d... 56115.827982: sys_open(filename: b770e6d1, flags: 0, mode: 1b6)

The reason is that syscall tracing doesn't record irq_flags into the buffer.
The proper display is:

 [root@jovi tracing]#echo "syscalls:sys_enter_open" > set_event
 [root@jovi tracing]# cat trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 14/14   #P:2
 #
 #                              _-----=> irqs-off
 #                             / _----=> need-resched
 #                            | / _---=> hardirq/softirq
 #                            || / _--=> preempt-depth
 #                            ||| /     delay
 #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
 #              | |       |   ||||       |         |
       irqbalance-514   [001] ....    46.213921: sys_open(filename: 804e1a6, flags: 0, mode: 1b6)
       irqbalance-514   [001] ....    46.214160: sys_open(filename: 804e1bb, flags: 0, mode: 1b6)
            <...>-920   [001] ....    47.307260: sys_open(filename: 4e82a0c5, flags: 80000, mode: 0)

Link: http://lkml.kernel.org/r/1365564393-10972-3-git-send-email-jovi.zhangwei@huawei.com
Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  tracing: Failed to create system directory
Steven Rostedt [Thu, 27 Jun 2013 14:58:31 +0000 (10:58 -0400)]
tracing: Failed to create system directory

commit 6e94a780374ed31b280f939d4757e8d7858dff16 upstream.

Running the following:

 # cd /sys/kernel/debug/tracing
 # echo p:i do_sys_open > kprobe_events
 # echo p:j schedule >> kprobe_events
 # cat kprobe_events
p:kprobes/i do_sys_open
p:kprobes/j schedule
 # echo p:i do_sys_open >> kprobe_events
 # cat kprobe_events
p:kprobes/j schedule
p:kprobes/i do_sys_open
 # ls /sys/kernel/debug/tracing/events/kprobes/
enable  filter  j

Notice that the 'i' is missing from the kprobes directory.

The console produces:

"Failed to create system directory kprobes"

This is because kprobes passes in an allocated name for the system
and the ftrace event subsystem saves off that name instead of creating
a duplicate for it. But kprobes may free the system name, making
the pointer to it invalid.

This bug was introduced by 92edca073c37 "tracing: Use direct field, type
and system names" which switched from using kstrdup() on the system name
in favor of just keeping a pointer to it, as the internal ftrace event
system names are static and exist for the life of the computer being booted.

Instead of reverting back to duplicating system names again, we can use
core_kernel_data() to determine if the passed in name was allocated or
static. Then use the MSB of the ref_count as a flag to keep track of
whether the name was allocated or not. That way we still avoid duplicating
strings that will always exist, but still copy the ones that may be freed.
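
A minimal sketch of that bit trick (illustrative names, not the actual
trace_events.c code): the MSB of the reference counter records whether the name
was dynamically allocated, and the remaining bits hold the count.

  #include <stdio.h>

  #define SYSTEM_NAME_ALLOCATED   (1u << 31)      /* flag kept in the MSB */
  #define REFCOUNT_MASK           (~SYSTEM_NAME_ALLOCATED)

  int main(void)
  {
          /* one reference, name was dynamically allocated and must be freed */
          unsigned int ref = 1u | SYSTEM_NAME_ALLOCATED;

          ref++;  /* taking another reference never disturbs the flag bit */
          printf("count=%u allocated=%d\n",
                 ref & REFCOUNT_MASK, !!(ref & SYSTEM_NAME_ALLOCATED));
          return 0;
  }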

Reported-by: "zhangwei(Jovi)" <jovi.zhangwei@huawei.com>
Reported-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  perf: Fix perf_lock_task_context() vs RCU
Peter Zijlstra [Fri, 12 Jul 2013 09:08:33 +0000 (11:08 +0200)]
perf: Fix perf_lock_task_context() vs RCU

commit 058ebd0eba3aff16b144eabf4510ed9510e1416e upstream.

Jiri managed to trigger this warning:

 [] ======================================================
 [] [ INFO: possible circular locking dependency detected ]
 [] 3.10.0+ #228 Tainted: G        W
 [] -------------------------------------------------------
 [] p/6613 is trying to acquire lock:
 []  (rcu_node_0){..-...}, at: [<ffffffff810ca797>] rcu_read_unlock_special+0xa7/0x250
 []
 [] but task is already holding lock:
 []  (&ctx->lock){-.-...}, at: [<ffffffff810f2879>] perf_lock_task_context+0xd9/0x2c0
 []
 [] which lock already depends on the new lock.
 []
 [] the existing dependency chain (in reverse order) is:
 []
 [] -> #4 (&ctx->lock){-.-...}:
 [] -> #3 (&rq->lock){-.-.-.}:
 [] -> #2 (&p->pi_lock){-.-.-.}:
 [] -> #1 (&rnp->nocb_gp_wq[1]){......}:
 [] -> #0 (rcu_node_0){..-...}:

Paul was quick to explain that due to preemptible RCU we cannot call
rcu_read_unlock() while holding scheduler (or nested) locks when part
of the read side critical section was preemptible.

Therefore solve it by making the entire RCU read side non-preemptible.

Also pull out the retry from under the non-preempt to play nice with RT.
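
Sketched in kernel style (abridged, not the exact perf_lock_task_context()
hunk), the shape of the fix is to keep the whole read side non-preemptible and
to retry outside of the preempt-off region:

  retry:
          preempt_disable();
          rcu_read_lock();
          /* ... look up the context and try to take ctx->lock ... */
          rcu_read_unlock();      /* the read side was never preemptible here */
          preempt_enable();
          if (need_retry)
                  goto retry;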

Reported-by: Jiri Olsa <jolsa@redhat.com>
Helped-out-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  perf: Remove WARN_ON_ONCE() check in __perf_event_enable() for valid scenario
Jiri Olsa [Tue, 9 Jul 2013 15:44:11 +0000 (17:44 +0200)]
perf: Remove WARN_ON_ONCE() check in __perf_event_enable() for valid scenario

commit 06f417968beac6e6b614e17b37d347aa6a6b1d30 upstream.

The '!ctx->is_active' check has a valid scenario, so
there's no need for the warning.

The reason is that there's a time window between the
'ctx->is_active' check in the perf_event_enable() function
and the __perf_event_enable() function having:

  - IRQs on
  - ctx->lock unlocked

where the task could be killed and 'ctx' deactivated by
perf_event_exit_task(), ending up with the warning below.

So remove the WARN_ON_ONCE() check and add comments to
explain it all.

This addresses the following warning reported by Vince Weaver:

[  324.983534] ------------[ cut here ]------------
[  324.984420] WARNING: at kernel/events/core.c:1953 __perf_event_enable+0x187/0x190()
[  324.984420] Modules linked in:
[  324.984420] CPU: 19 PID: 2715 Comm: nmi_bug_snb Not tainted 3.10.0+ #246
[  324.984420] Hardware name: Supermicro X8DTN/X8DTN, BIOS 4.6.3 01/08/2010
[  324.984420]  0000000000000009 ffff88043fce3ec8 ffffffff8160ea0b ffff88043fce3f00
[  324.984420]  ffffffff81080ff0 ffff8802314fdc00 ffff880231a8f800 ffff88043fcf7860
[  324.984420]  0000000000000286 ffff880231a8f800 ffff88043fce3f10 ffffffff8108103a
[  324.984420] Call Trace:
[  324.984420]  <IRQ>  [<ffffffff8160ea0b>] dump_stack+0x19/0x1b
[  324.984420]  [<ffffffff81080ff0>] warn_slowpath_common+0x70/0xa0
[  324.984420]  [<ffffffff8108103a>] warn_slowpath_null+0x1a/0x20
[  324.984420]  [<ffffffff81134437>] __perf_event_enable+0x187/0x190
[  324.984420]  [<ffffffff81130030>] remote_function+0x40/0x50
[  324.984420]  [<ffffffff810e51de>] generic_smp_call_function_single_interrupt+0xbe/0x130
[  324.984420]  [<ffffffff81066a47>] smp_call_function_single_interrupt+0x27/0x40
[  324.984420]  [<ffffffff8161fd2f>] call_function_single_interrupt+0x6f/0x80
[  324.984420]  <EOI>  [<ffffffff816161a1>] ? _raw_spin_unlock_irqrestore+0x41/0x70
[  324.984420]  [<ffffffff8113799d>] perf_event_exit_task+0x14d/0x210
[  324.984420]  [<ffffffff810acd04>] ? switch_task_namespaces+0x24/0x60
[  324.984420]  [<ffffffff81086946>] do_exit+0x2b6/0xa40
[  324.984420]  [<ffffffff8161615c>] ? _raw_spin_unlock_irq+0x2c/0x30
[  324.984420]  [<ffffffff81087279>] do_group_exit+0x49/0xc0
[  324.984420]  [<ffffffff81096854>] get_signal_to_deliver+0x254/0x620
[  324.984420]  [<ffffffff81043057>] do_signal+0x57/0x5a0
[  324.984420]  [<ffffffff8161a164>] ? __do_page_fault+0x2a4/0x4e0
[  324.984420]  [<ffffffff8161665c>] ? retint_restore_args+0xe/0xe
[  324.984420]  [<ffffffff816166cd>] ? retint_signal+0x11/0x84
[  324.984420]  [<ffffffff81043605>] do_notify_resume+0x65/0x80
[  324.984420]  [<ffffffff81616702>] retint_signal+0x46/0x84
[  324.984420] ---[ end trace 442ec2f04db3771a ]---

Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1373384651-6109-2-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  perf: Clone child context from parent context pmu
Jiri Olsa [Tue, 9 Jul 2013 15:44:10 +0000 (17:44 +0200)]
perf: Clone child context from parent context pmu

commit 734df5ab549ca44f40de0f07af1c8803856dfb18 upstream.

Currently when the child context for inherited events is
created, it's based on the pmu object of the first event
of the parent context.

This is wrong for the following scenario:

  - HW context having HW and SW event
  - HW event got removed (closed)
  - SW event stays in HW context as the only event
    and its pmu is used to clone the child context

The issue starts when the cpu context object is touched
based on the pmu context object (__get_cpu_context). In
this case the HW context will work with the SW cpu context,
ending up with the WARN below.

Fix this by using the parent context's pmu object when cloning
the child context.

Addresses the following warning reported by Vince Weaver:

[ 2716.472065] ------------[ cut here ]------------
[ 2716.476035] WARNING: at kernel/events/core.c:2122 task_ctx_sched_out+0x3c/0x)
[ 2716.476035] Modules linked in: nfsd auth_rpcgss oid_registry nfs_acl nfs locn
[ 2716.476035] CPU: 0 PID: 3164 Comm: perf_fuzzer Not tainted 3.10.0-rc4 #2
[ 2716.476035] Hardware name: AOpen   DE7000/nMCP7ALPx-DE R1.06 Oct.19.2012, BI2
[ 2716.476035]  0000000000000000 ffffffff8102e215 0000000000000000 ffff88011fc18
[ 2716.476035]  ffff8801175557f0 0000000000000000 ffff880119fda88c ffffffff810ad
[ 2716.476035]  ffff880119fda880 ffffffff810af02a 0000000000000009 ffff880117550
[ 2716.476035] Call Trace:
[ 2716.476035]  [<ffffffff8102e215>] ? warn_slowpath_common+0x5b/0x70
[ 2716.476035]  [<ffffffff810ab2bd>] ? task_ctx_sched_out+0x3c/0x5f
[ 2716.476035]  [<ffffffff810af02a>] ? perf_event_exit_task+0xbf/0x194
[ 2716.476035]  [<ffffffff81032a37>] ? do_exit+0x3e7/0x90c
[ 2716.476035]  [<ffffffff810cd5ab>] ? __do_fault+0x359/0x394
[ 2716.476035]  [<ffffffff81032fe6>] ? do_group_exit+0x66/0x98
[ 2716.476035]  [<ffffffff8103dbcd>] ? get_signal_to_deliver+0x479/0x4ad
[ 2716.476035]  [<ffffffff810ac05c>] ? __perf_event_task_sched_out+0x230/0x2d1
[ 2716.476035]  [<ffffffff8100205d>] ? do_signal+0x3c/0x432
[ 2716.476035]  [<ffffffff810abbf9>] ? ctx_sched_in+0x43/0x141
[ 2716.476035]  [<ffffffff810ac2ca>] ? perf_event_context_sched_in+0x7a/0x90
[ 2716.476035]  [<ffffffff810ac311>] ? __perf_event_task_sched_in+0x31/0x118
[ 2716.476035]  [<ffffffff81050dd9>] ? mmdrop+0xd/0x1c
[ 2716.476035]  [<ffffffff81051a39>] ? finish_task_switch+0x7d/0xa6
[ 2716.476035]  [<ffffffff81002473>] ? do_notify_resume+0x20/0x5d
[ 2716.476035]  [<ffffffff813654f5>] ? retint_signal+0x3d/0x78
[ 2716.476035] ---[ end trace 827178d8a5966c3d ]---

Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1373384651-6109-1-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  staging: line6: Fix unlocked snd_pcm_stop() call
Takashi Iwai [Thu, 11 Jul 2013 16:02:38 +0000 (18:02 +0200)]
staging: line6: Fix unlocked snd_pcm_stop() call

commit 86f0b5b86d142b9323432fef078a6cf0fb5dda74 upstream.

snd_pcm_stop() must be called in the PCM substream lock context.
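
In ALSA terms the fix boils down to taking the stream lock around the call,
roughly like this (a sketch, not the exact line6 hunk; the state argument is
illustrative):

  snd_pcm_stream_lock_irq(substream);
  snd_pcm_stop(substream, SNDRV_PCM_STATE_SETUP);
  snd_pcm_stream_unlock_irq(substream);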

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  MIPS: Octeon: Don't clobber bootloader data structures.
David Daney [Wed, 12 Jun 2013 17:28:33 +0000 (17:28 +0000)]
MIPS: Octeon: Don't clobber bootloader data structures.

commit d949b4fe6d23dd92b5fa48cbf7af90ca32beed2e upstream.

Commit abe77f90dc (MIPS: Octeon: Add kexec and kdump support) added a
bootmem region for the kernel image itself.  The problem is that this
is rounded up to a 0x100000 boundary, which is memory that may not be
owned by the kernel.  Depending on the kernel's configuration-dependent
size, this 'extra' memory may contain data passed from the bootloader
to the kernel itself, which, if clobbered, makes the kernel crash in
various ways.

The fix: Quit rounding the size up, so that we only use memory
assigned to the kernel.

Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5449/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  thermal: cpu_cooling: fix stub function
Arnd Bergmann [Fri, 5 Jul 2013 15:40:13 +0000 (17:40 +0200)]
thermal: cpu_cooling: fix stub function

commit e8d39240d635ed9bcaddbec898b1c9f063c5dbb2 upstream.

The function stub for cpufreq_cooling_get_level introduced
in 57df81069 "Thermal: exynos: fix cooling state translation"
is not syntactically correct C and needs to be fixed to avoid
this error:

In file included from drivers/thermal/db8500_thermal.c:20:0:
 include/linux/cpu_cooling.h: In function 'cpufreq_cooling_get_level':
include/linux/cpu_cooling.h:57:1: error: parameter name omitted
  unsigned long cpufreq_cooling_get_level(unsigned int, unsigned int)
  ^
 include/linux/cpu_cooling.h:57:1: error: parameter name omitted
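
The underlying C rule, shown on a stand-alone toy function rather than the
actual cpu_cooling.h stub: omitting parameter names is allowed in a
declaration, but a static inline stub in a header is a definition, and a
definition needs named parameters (prior to C23).

  /* declaration: parameter names may be omitted */
  unsigned long get_level(unsigned int, unsigned int);

  /* definition: parameters must be named, otherwise the compiler
   * reports "error: parameter name omitted" */
  static inline unsigned long get_level_stub(unsigned int cpu,
                                             unsigned int freq)
  {
          (void)cpu;
          (void)freq;
          return (unsigned long)-1;       /* placeholder "invalid level" */
  }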

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Eduardo Valentin <eduardo.valentin@ti.com>
Cc: Zhang Rui <rui.zhang@intel.com>
Cc: Amit Daniel kachhap <amit.daniel@samsung.com>
Signed-off-by: Eduardo Valentin <eduardo.valentin@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  xtensa: adjust boot parameters address when INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX...
Max Filippov [Sun, 9 Jun 2013 00:52:11 +0000 (04:52 +0400)]
xtensa: adjust boot parameters address when INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX is selected

commit c5a771d0678f9613e9f89cf1a5bdcfa5b08b225b upstream.

The virtual address of boot parameters chain is passed to the kernel via
a2 register. Adjust it in case it is remapped during MMUv3 -> MMUv2
mapping change, i.e. when it is in the first 128M.

Also fix interpretation of initrd and FDT addresses passed in the boot
parameters: these are physical addresses.

Reported-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Chris Zankel <chris@zankel.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  iommu/amd: Only unmap large pages from the first pte
Alex Williamson [Fri, 21 Jun 2013 20:33:19 +0000 (14:33 -0600)]
iommu/amd: Only unmap large pages from the first pte

commit 60d0ca3cfd199b6612bbbbf4999a3470dad38bb1 upstream.

If we use a large mapping, the expectation is that only unmaps from
the first pte in the superpage are supported.  Unmaps from offsets
into the superpage should fail (ie. return zero sized unmap).  In the
current code, unmapping from an offset clears the size of the full
mapping starting from an offset.  For instance, if we map a 16k
physically contiguous range at IOVA 0x0 with a large page, then
attempt to unmap 4k at offset 12k, 4 ptes are cleared (12k - 28k) and
the unmap returns 16k unmapped.  This potentially incorrectly clears
valid mappings and confuses drivers like VFIO that use the unmap size
to release pinned pages.

Fix by refusing to unmap from offsets into the page.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  drm/nv50-/disp: Use output specific mask in interrupt
Emil Velikov [Tue, 2 Jul 2013 13:44:12 +0000 (14:44 +0100)]
drm/nv50-/disp: Use output specific mask in interrupt

commit 378f2bcdf7c971453d11580936dc0ffe845f5880 upstream.

The commit

   commit 476e84e126171d809f9c0b5d97137f5055f95ca8
   Author: Ben Skeggs <bskeggs@redhat.com>
   Date:   Mon Feb 11 09:24:23 2013 +1000

       drm/nv50-/disp: initial supervisor support for off-chip encoders

changed the write mask in one of the interrupt functions for on-chip encoders,
causing a regression in certain VGA dual-head setups. This commit reintroduces
the mask, thus resolving the regression.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=66129
Reported-and-Tested-by: Yves-Alexis <corsac@debian.org>
CC: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  drm/nva3/disp: Fix HDMI audio regression
Ilia Mirkin [Wed, 3 Jul 2013 07:06:02 +0000 (03:06 -0400)]
drm/nva3/disp: Fix HDMI audio regression

commit bf03d1b293cc556df53545e318110505014d805e upstream.

This is the nva3 counterpart to commit beba44b17 (drm/nv84/disp: Fix
HDMI audio regression). The regression happened as a result of
refactoring in commit 8e9e3d2de (drm/nv84/disp: move hdmi control into
core).

Reported-and-tested-by: Max Baldwin <archerseven@gmail.com>
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  drm/radeon: add backlight quirk for hybrid mac
Alex Deucher [Mon, 10 Jun 2013 13:57:07 +0000 (09:57 -0400)]
drm/radeon: add backlight quirk for hybrid mac

commit 80101790670385a85aca35ecae4b89e3f2fceecc upstream.

Mac laptops with multiple GPUs apparently use the gmux
driver for backlight control.  Don't register a radeon
backlight interface.  We may need to add other pci ids
for other hybrid mac laptops.

Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=65377

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  drm/radeon: fix AVI infoframe generation
Alex Deucher [Fri, 7 Jun 2013 14:41:03 +0000 (10:41 -0400)]
drm/radeon: fix AVI infoframe generation

commit f100380ecd8287b0909d3c5694784adc46e78a4a upstream.

- remove adding 2 to checksum, this is incorrect.

This was incorrectly introduced in:
92db7f6c860b8190571a9dc1fcbc16d003422fe8
http://lists.freedesktop.org/archives/dri-devel/2011-December/017717.html
However, the off by 2 was due to adding the version twice.
From the examples in the URL above:

[Rafał Miłecki][RV620] fglrx:
0x7454: 00 A8 5E 79     R600_HDMI_VIDEOINFOFRAME_0
0x7458: 00 28 00 10     R600_HDMI_VIDEOINFOFRAME_1
0x745C: 00 48 00 28     R600_HDMI_VIDEOINFOFRAME_2
0x7460: 02 00 00 48     R600_HDMI_VIDEOINFOFRAME_3
===================
(0x82 + 0x2 + 0xD) + 0x1F8 = 0x289
-0x289 = 0x77

However, the payload sum is not 0x1f8, it's 0x1f6.
00 + A8 + 5E + 00 +
00 + 28 + 00 + 10 +
00 + 48 + 00 + 28 +
00 + 48 =
0x1f6

Bits 25:24 of HDMI_VIDEOINFOFRAME_3 are the packet version, not part
of the payload.  So the total would be:
(0x82 + 0x2 + 0xD) + 0x1f6 = 0x287
-0x287 = 0x79

- properly emit the AVI infoframe version.  This was not being
emitted previously, which is probably what caused the issue above.

This should fix blank screen when HDMI audio is enabled on
certain monitors.
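
The corrected arithmetic can be reproduced with a short stand-alone program
(a hypothetical helper, not a radeon driver function): the checksum is chosen
so that the header bytes (0x82, 0x02, 0x0D), the payload and the checksum sum
to zero modulo 256.

  #include <stdio.h>
  #include <stdint.h>
  #include <stddef.h>

  static uint8_t avi_infoframe_checksum(const uint8_t *payload, size_t len)
  {
          unsigned int sum = 0x82 + 0x02 + 0x0D;  /* type + version + length */
          size_t i;

          for (i = 0; i < len; i++)
                  sum += payload[i];
          return (uint8_t)(0x100 - (sum & 0xFF));
  }

  int main(void)
  {
          /* the 14 payload bytes summed in the commit message above */
          const uint8_t payload[] = {
                  0x00, 0xA8, 0x5E, 0x00,
                  0x00, 0x28, 0x00, 0x10,
                  0x00, 0x48, 0x00, 0x28,
                  0x00, 0x48,
          };

          /* prints 0x79, matching the expected checksum above */
          printf("checksum = 0x%02X\n",
                 avi_infoframe_checksum(payload, sizeof(payload)));
          return 0;
  }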

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  drm/nouveau: use vmalloc for pgt allocation
Marcin Slusarz [Tue, 11 Jun 2013 08:50:30 +0000 (10:50 +0200)]
drm/nouveau: use vmalloc for pgt allocation

commit d005f51eb93d71cd40ebd11dd377453fa8c8a42a upstream.

Page tables on nv50 take 48kB, which can be hard to allocate in one piece.
Let's use vmalloc.

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  drm/mgag200: Added resolution and bandwidth limits for various G200e products.
Julia Lemire [Thu, 27 Jun 2013 17:38:59 +0000 (13:38 -0400)]
drm/mgag200: Added resolution and bandwidth limits for various G200e products.

commit abbee6238775c6633a3779962e9e5b5cb9823749 upstream.

At the larger resolutions, the g200e series sometimes struggles with
maintaining a proper output.  Problems like flickering or black bands appearing
on screen can occur.  In order to avoid this, limitations regarding resolutions
and bandwidth have been added for the different variations of the g200e series.
This code was ported from the old xorg mga driver.

Signed-off-by: Julia Lemire <jlemire@matrox.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  drm/gem: fix not to assign error value to gem name
YoungJun Cho [Wed, 26 Jun 2013 23:58:33 +0000 (08:58 +0900)]
drm/gem: fix not to assign error value to gem name

commit 2e07fb229396f99fc173d8612f0f83ea9de0341b upstream.

If idr_alloc() fails, obj->name can be an error value. Also
clean up the duplicated flink processing code.

This regression has been introduced in

commit 2e928815c1886fe628ed54623aa98d0889cf5509
Author: Tejun Heo <tj@kernel.org>
Date:   Wed Feb 27 17:04:08 2013 -0800

    drm: convert to idr_alloc()

Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  drm/i915: Only clear write-domains after a successful wait-seqno
Chris Wilson [Fri, 28 Jun 2013 15:54:08 +0000 (16:54 +0100)]
drm/i915: Only clear write-domains after a successful wait-seqno

commit daa13e1ca587bc773c1aae415ed1af6554117bd4 upstream.

In the introduction of the non-blocking wait, I cut'n'pasted the wait
completion code from the normal locked path. Unfortunately, this neglected
that the normal path returned early if the wait returned early. The
result is that read-only waits may return whilst the GPU is still
writing to the bo.

Fixes regression from
commit 3236f57a0162391f84b93f39fc1882c49a8998c7 [v3.7]
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Fri Aug 24 09:35:09 2012 +0100

    drm/i915: Use a non-blocking wait for set-to-domain ioctl

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=66163
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  drm/i915: Fix context sizes on HSW
Ben Widawsky [Wed, 26 Jun 2013 04:53:40 +0000 (21:53 -0700)]
drm/i915: Fix context sizes on HSW

commit a0de80a0e07032a111230ec92eca563f9d93648d upstream.

With updates to the spec, we can actually see the context layout, and
how many dwords are allocated. That table suggests we need 70720 bytes
per HW context. Rounded up, this is 18 pages. Looking at what lives
after the current 4 pages we use, I can't see much of importance (mostly
it's d3d related), but there are a couple of things which look scary. I
am hopeful this can explain some of our odd HSW failures.

v2: Make the context only 17 pages. The power context space isn't used
ever, and execlists aren't used in our driver, making the actual total
66944 bytes.

v3: Add a comment to the code. (Jesse & Paulo)

Reported-by: "Azad, Vinit" <vinit.azad@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  drm/i915: Fix up sdvo hpd pins for i965g/gm
Daniel Vetter [Mon, 24 Jun 2013 19:33:28 +0000 (21:33 +0200)]
drm/i915: Fix up sdvo hpd pins for i965g/gm

commit 4f7fd7095d85cd31c86cb9ba87bc301319630ccc upstream.

Bspec seems to be full of lies, at least it disagrees with reality:
two systems corroborated that the SDVO hpd bits are the same as on gen3.

v2: Update comment a bit.

Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Arthur Ranyan <arthur.j.runyan@intel.com>
Reported-and-tested-by: Alex Fiestas <afiestas@kde.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58405
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  of: Fix address decoding on Bimini and js2x machines
Benjamin Herrenschmidt [Wed, 3 Jul 2013 06:01:10 +0000 (16:01 +1000)]
of: Fix address decoding on Bimini and js2x machines

commit 6dd18e4684f3d188277bbbc27545248487472108 upstream.

 Commit:

  e38c0a1fbc5803cbacdaac0557c70ac8ca5152e7
  of/address: Handle #address-cells > 2 specially

broke real time clock access on Bimini, js2x, and similar powerpc
machines using the "maple" platform. That code was indirectly relying
on the old (broken) behaviour of the translation for the hypertransport
to ISA bridge.

This fixes it by treating hypertransport as a PCI bus.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
Cc: Jonghwan Choi <jhbird.choi@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  svcrpc: don't error out on small tcp fragment
J. Bruce Fields [Wed, 26 Jun 2013 14:55:40 +0000 (10:55 -0400)]
svcrpc: don't error out on small tcp fragment

commit 1f691b07c5dc51b2055834f58c0f351defd97f27 upstream.

Though clients we care about mostly don't do this, it is possible for
rpc requests to be sent in multiple fragments.  Here we have a sanity
check to ensure that the final received rpc isn't too small--except that
the number we're actually checking is the length of just the final
fragment, not of the whole rpc.  So a perfectly legal rpc that's
unluckily fragmented could cause the server to close the connection
here.

Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  svcrpc: fix handling of too-short rpc's
J. Bruce Fields [Wed, 26 Jun 2013 15:09:06 +0000 (11:09 -0400)]
svcrpc: fix handling of too-short rpc's

commit cf3aa02cb4a0c5af5557dd47f15a08a7df33182a upstream.

If we detect that an rpc is too short, we abort and close the
connection.  Except, there's a bug here: we're leaving sk_datalen
nonzero without leaving any pages in the sk_pages array.  The most
likely result of the inconsistency is a subsequent crash in
svc_tcp_clear_pages.

Also demote the BUG_ON in svc_tcp_clear_pages to a WARN.

Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  svcrpc: fix failures to handle -1 uid's
J. Bruce Fields [Mon, 8 Jul 2013 17:44:45 +0000 (13:44 -0400)]
svcrpc: fix failures to handle -1 uid's

commit 0979292bfa301cb87d936b69af428090d2feea1b upstream.

As of f025adf191924e3a75ce80e130afcd2485b53bb8 "sunrpc: Properly decode
kuids and kgids in RPC_AUTH_UNIX credentials" any rpc containing a -1
(0xffff) uid or gid would fail with a badcred error.

Commit afe3c3fd5392b2f0066930abc5dbd3f4b14a0f13 "svcrpc: fix failures to
handle -1 uid's and gid's" fixed part of the problem, but overlooked the
gid upcall--the kernel can request supplementary gid's for the -1 uid,
but mountd's attempt to write a response will get -EINVAL.

Symptoms were nfsd failing to reply to the first attempt to use a newly
negotiated krb5 context.

Reported-by: Sven Geggus <lists@fuchsschwanzdomain.de>
Tested-by: Sven Geggus <lists@fuchsschwanzdomain.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  uprobes: Fix return value in error handling path
zhangwei(Jovi) [Thu, 13 Jun 2013 06:21:51 +0000 (14:21 +0800)]
uprobes: Fix return value in error handling path

commit fa44063f9ef163c3a4c8d8c0465bb8a056b42035 upstream.

When a wrong argument is passed into uprobe_events, it does not return
an error:

[root@jovi tracing]# echo 'p:myprobe /bin/bash' > uprobe_events
[root@jovi tracing]#

The proper response is:

[root@jovi tracing]# echo 'p:myprobe /bin/bash' > uprobe_events
-bash: echo: write error: Invalid argument

Link: http://lkml.kernel.org/r/51B964FF.5000106@huawei.com
Signed-off-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: <srikar@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  printk: Fix rq->lock vs logbuf_lock unlock lock inversion
Bu, Yitian [Mon, 18 Feb 2013 12:53:37 +0000 (12:53 +0000)]
printk: Fix rq->lock vs logbuf_lock unlock lock inversion

commit dbda92d16f8655044e082930e4e9d244b87fde77 upstream.

commit 07354eb1a74d1 ("locking printk: Annotate logbuf_lock as raw")
reintroduced a lock inversion problem which was fixed in commit
0b5e1c5255 ("printk: Release console_sem after logbuf_lock"). This
probably happened when fixing up patch rejects.

Restore the ordering and unlock logbuf_lock before releasing
console_sem.

Signed-off-by: ybu <ybu@qti.qualcomm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/E807E903FE6CBE4D95E420FBFCC273B827413C@nasanexd01h.na.qualcomm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  mac80211: close AP_VLAN interfaces before unregistering all
Johannes Berg [Thu, 23 May 2013 23:06:09 +0000 (01:06 +0200)]
mac80211: close AP_VLAN interfaces before unregistering all

commit 4c8a9d4bfaf7dbc7d2168494904d79d22cc01db7 upstream.

Since Eric's commit efe117ab8 ("Speedup ieee80211_remove_interfaces")
there's a bug in mac80211 when it unregisters with AP_VLAN interfaces
up. If the AP_VLAN interface was registered after the AP it belongs
to (which is the typical case) and then we get into this code path,
unregister_netdevice_many() will crash because it isn't prepared to
deal with interfaces being closed in the middle of it. Exactly this
happens though, because we iterate the list, find the AP master this
AP_VLAN belongs to and dev_close() the dependent VLANs. After this,
unregister_netdevice_many() won't pick up the fact that the AP_VLAN
is already down and will do it again, causing a crash.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  b43: ensure that BCMA is "y" when B43 is "y"
Hauke Mehrtens [Sun, 9 Jun 2013 16:53:58 +0000 (18:53 +0200)]
b43: ensure that BCMA is "y" when B43 is "y"

commit 693026ef2e751fd94d2e6c71028e68343cc875d5 upstream.

When b43 gets built into the kernel and it should use bcma, we have to
ensure that bcma is also built into the kernel and not as a module.
In this patch this is also done for SSB, although you cannot
build b43 without ssb support for now.

This fixes a build problem reported by Randy Dunlap in
5187EB95.2060605@infradead.org

Reported-By: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  virtio_balloon: leak_balloon(): only tell host if we got pages deflated
Luiz Capitulino [Tue, 2 Jul 2013 06:05:13 +0000 (15:35 +0930)]
virtio_balloon: leak_balloon(): only tell host if we got pages deflated

commit 8c6bab4f3874d31804a00782c48a8f244a0d3cc0 upstream.

balloon_page_dequeue() can return NULL.  If it does for the first page
being freed then leak_balloon() will create a scatter list with len=0,
which in turn seems to generate an invalid virtio request.

I didn't get this in practice, I found it by code review.  On the other
hand, such an invalid virtio request will cause errors in QEMU and
fill_balloon() also performs the same check implemented by this commit.

This bug was introduced in e2250429.
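
The guard itself is small; roughly (field and helper names approximate, not the
exact driver hunk):

  /* only notify the host if at least one page was actually deflated,
   * otherwise the scatter list would have length zero */
  if (vb->num_pfns != 0)
          tell_host(vb, vb->deflate_vq);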

Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Acked-by: Rafael Aquini <aquini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  SCSI: mpt2sas: Fix for issue Missing delay not getting set during system bootup
Reddy, Sreekanth [Tue, 26 Feb 2013 11:29:59 +0000 (16:59 +0530)]
SCSI: mpt2sas: Fix for issue Missing delay not getting set during system bootup

commit b0df96a0068daee4f9c2189c29b9053eb6e46b17 upstream.

The missing delay module parameter is not getting set properly. The reason is
that it is not defined in the same file from which it is being invoked.  The
fix is to move the missing delay module parameter from mpt2sas_base.c to
mpt2sas_scsih.c.

Signed-off-by: Sreekanth Reddy <Sreekanth.Reddy@lsi.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  SCSI: mpt2sas: fix firmware failure with wrong task attribute
Sreekanth Reddy [Fri, 1 Feb 2013 19:28:20 +0000 (00:58 +0530)]
SCSI: mpt2sas: fix firmware failure with wrong task attribute

commit 48ba2efc382f94fae16ca8ca011e5961a81ad1ea upstream.

When a SCSI command is received with the task attribute not set, set it to
SIMPLE. Previously it was set to untagged, which caused the firmware to fail
the commands.

Signed-off-by: Sreekanth Reddy <Sreekanth.Reddy@lsi.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  SCSI: zfcp: status read buffers on first adapter open with link down
Steffen Maier [Fri, 26 Apr 2013 15:34:54 +0000 (17:34 +0200)]
SCSI: zfcp: status read buffers on first adapter open with link down

commit 9edf7d75ee5f21663a0183d21f702682d0ef132f upstream.

Commit 64deb6efdc5504ce97b5c1c6f281fffbc150bd93
"[SCSI] zfcp: Use status_read_buf_num provided by FCP channel"
started using a value returned by the channel but only evaluated the value
if the fabric link is up.
Commit 8d88cf3f3b9af4713642caeb221b6d6a42019001
"[SCSI] zfcp: Update status read mempool"
introduced mempool resizings based on the above value.
On setting an FCP device online for the very first time since boot, a new
zeroed adapter object is allocated. If the link is down, the number of
status read requests remains zero. Since just the config data exchange is
incomplete, we proceed with adapter open recovery. However, we
unconditionally call mempool_resize with adapter->stat_read_buf_num == 0 in
this case.

This causes a kernel message "kernel BUG at mm/mempool.c:131!" in process
"zfcperp<FCP-device-bus-ID>" with last function mempool_resize in Krnl PSW
and zfcp_erp_thread in the Call Trace.

Don't evaluate channel values which are invalid on link down. The number of
status read requests is always valid, evaluated, and set to a positive
minimum greater than zero. The adapter open recovery can proceed and the
channel has status read buffers to inform us on a future link up event.
While we are not aware of any other code path that could result in mempool
resize attempts of size zero, we still also initialize the number of status
read buffers to be posted to a static minimum number on adapter object
allocation.

Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  SCSI: zfcp: block queue limits with data router
Steffen Maier [Fri, 26 Apr 2013 15:33:45 +0000 (17:33 +0200)]
SCSI: zfcp: block queue limits with data router

commit 5fea4291deacd80188b996d2f555fc6a1940e5d4 upstream.

Commit 86a9668a8d29ea711613e1cb37efa68e7c4db564
"[SCSI] zfcp: support for hardware data router"
reduced the initial block queue limits in the scsi_host_template to the
absolute minimum and adjusted them later on. However, the adjustment was
too late for the BSG devices of Scsi_Host and fc_host.

Therefore, ioctl(..., SG_IO, ...) with request or response size > 4kB to a
BSG device of an fc_host or a Scsi_Host fails with EINVAL. As a result,
users of such ioctl such as HBA_SendCTPassThru() in libzfcphbaapi return
with error HBA_STATUS_ERROR.

Initialize the block queue limits in zfcp_scsi_host_template to the
greatest common denominator (GCD).

While we cannot exploit the slightly enlarged maximum request size with
data router, this should be negligible. Doing so also avoids running into
trouble after live guest relocation (LGR) / migration from a data router
FCP device to an FCP device that does not support data router. In that
case, zfcp would figure out the new limits on adapter recovery, but the
fc_host and Scsi_Host (plus in fact all sdevs) still exist with the old and
now too large queue limits.

It should also be OK not to use half the size as in the DIX case, because
fc_host and Scsi_Host do not transport FCP requests including SCSI commands
using protection data.

Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Reviewed-by: Martin Peschke <mpeschke@linux.vnet.ibm.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  SCSI: zfcp: fix adapter (re)open recovery while link to SAN is down
Daniel Hansel [Fri, 26 Apr 2013 15:32:14 +0000 (17:32 +0200)]
SCSI: zfcp: fix adapter (re)open recovery while link to SAN is down

commit f76ccaac4f82c463a037aa4a1e4ccb85c7011814 upstream.

The FCP device remains in status ERP_FAILED when the device is switched online
or adapter recovery is triggered while the link to the SAN is down.

When Exchange Configuration Data command returns the FSF status
FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE it aborts the exchange process.
The only retries are done during the common error recovery procedure
(i.e. max. 3 retries with 8 sec sleep in between), and the device remains in
status ERP_FAILED with QDIO down.

This commit reverts the commit 0df138476c8306478d6e726f044868b4bccf411c
(zfcp: Fix adapter activation on link down).
When FSF status FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE is received the
adapter recovery will be finished without any retries. QDIO will be
up now and status changes such as LINK UP will be received now.

Signed-off-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  SCSI: aacraid: Fix for arrays are going offline in the system. System hangs
Mahesh Rajashekhara [Tue, 18 Jun 2013 11:32:07 +0000 (17:02 +0530)]
SCSI: aacraid: Fix for arrays are going offline in the system. System hangs

commit c5bebd829dd95602c15f8da8cc50fa938b5e0254 upstream.

One of our customers reported that a set of RAID logical arrays would become
unavailable (I/O offline) after long hours of I/O stress testing.  The OS
wouldn't be accessible afterwards and required a hard reset.

This driver patch fixes a race condition between the doorbell and the
circular buffer. The driver is modified to do an extra read after clearing the
doorbell, in case a completion had been posted during the small timing
window.

With this fix, we ran IO stress for ~13 days. There were no IO failures.

Signed-off-by: Mahesh Rajashekhara <Mahesh.Rajashekhara@pmcs.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  SCSI: sd: Update WRITE SAME heuristics
Martin K. Petersen [Fri, 7 Jun 2013 02:15:55 +0000 (22:15 -0400)]
SCSI: sd: Update WRITE SAME heuristics

commit 66c28f97120e8a621afd5aa7a31c4b85c547d33d upstream.

SATA drives located behind a SAS controller would incorrectly receive
WRITE SAME commands. Tweak the heuristics so that:

 - If REPORT SUPPORTED OPERATION CODES is provided we will use that to
   choose between WRITE SAME(16), WRITE SAME(10) and disabled. This also
   fixes an issue with the old code which would issue WRITE SAME(10)
   despite the command not being whitelisted in REPORT SUPPORTED
   OPERATION CODES.

 - If REPORT SUPPORTED OPERATION CODES is not provided we will fall back
   to WRITE SAME(10) unless the device has an ATA Information VPD page.
   The assumption is that a SATL which is smart enough to implement
   WRITE SAME would also provide REPORT SUPPORTED OPERATION CODES.

To facilitate the new heuristics, scsi_report_opcode() has been modified
so we can distinguish between "operation not supported" and "RSOC not
supported".

Reported-by: H. Peter Anvin <hpa@zytor.com>
Tested-by: Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  ath9k: Do not assign noise for NULL caldata
Sujith Manoharan [Mon, 10 Jun 2013 08:19:40 +0000 (13:49 +0530)]
ath9k: Do not assign noise for NULL caldata

commit d3bcb7b24bbf09fde8405770e676fe0c11c79662 upstream.

ah->noise is maintained globally and not per-channel. This
is updated in the reset() routine after the NF history has been
filled for the *current channel*, just before switching to
the new channel. There is no need to do it inside getnf(), since
ah->noise must contain a value for the new channel.

Signed-off-by: Sujith Manoharan <c_manoha@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years ago  ath9k: Fix noisefloor calibration
Sujith Manoharan [Mon, 10 Jun 2013 08:19:39 +0000 (13:49 +0530)]
ath9k: Fix noisefloor calibration

commit 696df78509d1f81b651dd98ecdc1aecab616db6b upstream.

The commits,

"ath9k: Fix regression in channelwidth switch at the same channel"
"ath9k: Fix invalid noisefloor reading due to channel update"

attempted to fix noisefloor calibration when a channel switch
happens due to HT20/HT40 bandwidth change. This is causing invalid
readings resulting in messages like:

"ath: phy16: NF[0] (-45) > MAX (-95), correcting to MAX".

This results in an incorrect noise being used initially for reporting
the signal level of received packets, until NF calibration is done
and the history buffer is updated via the ANI timer, which happens
much later.

When a bandwidth change happens, it is appropriate to reset
the internal history data for the channel. Do this correctly in the
reset() routine by checking the "chanmode" variable.

Signed-off-by: Sujith Manoharan <c_manoha@qca.qualcomm.com>
Cc: Rajkumar Manoharan <rmanohar@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoath9k_hw: Assign default xlna config for AR9485
Sujith Manoharan [Mon, 10 Jun 2013 08:19:38 +0000 (13:49 +0530)]
ath9k_hw: Assign default xlna config for AR9485

commit 30d5b709da23f4ab9836c7f66d2d2e780a69cf12 upstream.

For AR9485 boards with XLNA, the default gpio config
is not set correctly; fix this.

Signed-off-by: Sujith Manoharan <c_manoha@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agort2x00: rt2800lib: fix default TX power check for RT55xx
Gabor Juhos [Tue, 25 Jun 2013 20:57:29 +0000 (22:57 +0200)]
rt2x00: rt2800lib: fix default TX power check for RT55xx

commit 0847beb2865f5ef1c8626ec1a37def18f3d6c41a upstream.

The code writes the default_power2 value into the TX field
of the RFCSR50 register; however, the condition in the if
statement uses default_power1. Due to this, a wrong TX power
value might be written into the register.

Use the correct value in the condition to fix the issue.
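
In essence (an illustrative sketch rather than the literal rt2800lib.c hunk;
write_rfcsr50_tx() is a hypothetical helper), the bound check must test the
value that is actually written:

  u8 tx = (default_power2 > POWER_BOUND) ? POWER_BOUND : default_power2;
  write_rfcsr50_tx(rt2x00dev, tx);  /* the old check compared default_power1 */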

Compile tested only.

Signed-off-by: Gabor Juhos <juhosg@openwrt.org>
Acked-by: Gertjan van Wingerde <gwingerde@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agort2x00: read 5GHz TX power values from the correct offset
Gabor Juhos [Sat, 22 Jun 2013 11:13:25 +0000 (13:13 +0200)]
rt2x00: read 5GHz TX power values from the correct offset

commit 0a6f3a8ebaf13407523c2c7d575b4ca2debd23ba upstream.

The current code uses the same index value both
for the channel information array and for the TX
power table. The index starts from 14; however, the
index of the TX power table must start from zero.

Fix it, in order to get the correct TX power value
for a given channel.
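
Schematically (names are illustrative; the real tables live in the per-chip
drivers), the fix subtracts the channel offset when indexing the TX power
table:

  /* 5GHz channels start at index 14 in the channel info array,
     but the 5GHz TX power table is indexed from 0. */
  for (i = 14; i < spec->num_channels; i++)
          info[i].default_power1 = tx_power_from_dev(tx_power[i - 14]);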

The changes in rt61pci.c and rt73usb.c are compile
tested only.

Signed-off-by: Gabor Juhos <juhosg@openwrt.org>
Acked-by: Stanislaw Gruszka <stf_xl@wp.pl>
Acked-by: Gertjan van Wingerde <gwingerde@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoclocksource: dw_apb: Fix error check
Baruch Siach [Wed, 29 May 2013 08:11:17 +0000 (10:11 +0200)]
clocksource: dw_apb: Fix error check

commit 1a33bd2be705cbb3f57d7223b60baea441039307 upstream.

irq_of_parse_and_map() returns 0 on error, while the code checks for NO_IRQ.
This breaks on platforms that have NO_IRQ != 0.
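
A minimal sketch of the corrected check (surrounding code simplified):

  unsigned int irq = irq_of_parse_and_map(np, 0);

  if (!irq)  /* 0 means failure; NO_IRQ may be a different value, e.g. -1 */
          panic("No IRQ for the timer");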

Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agotick: Prevent uncontrolled switch to oneshot mode
Thomas Gleixner [Mon, 1 Jul 2013 20:14:10 +0000 (22:14 +0200)]
tick: Prevent uncontrolled switch to oneshot mode

commit 1f73a9806bdd07a5106409bbcab3884078bd34fe upstream.

When the system switches from periodic to oneshot mode, the broadcast
logic makes it possible for a CPU which has not yet switched to
oneshot mode to put its own clock event device into oneshot mode without
updating the state and the timer handler.

CPU0 CPU1
per cpu tickdev is in periodic mode
and switched to broadcast

Switch to oneshot mode
 tick_broadcast_switch_to_oneshot()
  cpumask_copy(tick_broadcast_oneshot_mask,
       tick_broadcast_mask);

  broadcast device mode = oneshot

Timer interrupt

irq_enter()
 tick_check_oneshot_broadcast()
  dev->set_mode(ONESHOT);

tick_handle_periodic()
 if (dev->mode == ONESHOT)
   dev->next_event += period;
   FAIL.

We fail, because dev->next_event contains KTIME_MAX, if the device was
in periodic mode before the uncontrolled switch to oneshot happened.

We must copy the broadcast bits over to the oneshot mask, because
otherwise a CPU which relies on the broadcast would not be woken up
anymore after the broadcast device switched to oneshot mode.

So we need to verify in tick_check_oneshot_broadcast() whether the CPU
has already switched to oneshot mode. If not, leave the device
untouched and let the CPU switch into oneshot mode in a controlled fashion.
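
A sketch of the added guard, based on the description above (not necessarily
the literal tick-broadcast.c diff):

  struct tick_device *td = &per_cpu(tick_cpu_device, cpu);

  /* Only touch the local device if this CPU has itself already
     switched to oneshot mode; otherwise leave it alone. */
  if (td->mode == TICKDEV_MODE_ONESHOT)
          clockevents_set_mode(td->evtdev, CLOCK_EVT_MODE_ONESHOT);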

This is a long-standing bug which was never noticed, because the main
user of the broadcast, x86, cannot run into that scenario, AFAICT. The
nonarchitected timer mess of ARM creates a gazillion of differently
broken abominations which trigger the shortcomings of that broadcast
code, which better had never been necessary in the first place.

Reported-and-tested-by: Stehle Vincent-B46079 <B46079@freescale.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: John Stultz <john.stultz@linaro.org>,
Cc: Mark Rutland <mark.rutland@arm.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1307012153060.4013@ionos.tec.linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agotick: Sanitize broadcast control logic
Thomas Gleixner [Mon, 1 Jul 2013 20:14:10 +0000 (22:14 +0200)]
tick: Sanitize broadcast control logic

commit 07bd1172902e782f288e4d44b1fde7dec0f08b6f upstream.

The recent implementation of a generic dummy timer resulted in a
different registration order of per cpu local timers which made the
broadcast control logic go belly up.

If the dummy timer is the first clock event device which is registered
for a CPU, then it is installed, the broadcast timer is initialized
and the CPU is marked as a broadcast target.

If a real clock event device is installed after that, we can fail to
take the CPU out of the broadcast mask. In the worst case we end up
with two periodic timer events firing for the same CPU. One from the
per cpu hardware device and one from the broadcast.

Now the problem is that we have no way to distinguish whether the
system is in a state which makes broadcasting necessary or the
broadcast bit was set due to the nonfunctional dummy timer
installment.

To solve this we need to keep track of the system state separately and
provide a more detailed decision logic whether we keep the CPU in
broadcast mode or not.

The old decision logic only clears the broadcast mode, if the newly
installed clock event device is not affected by power states.

The new logic clears the broadcast mode if one of the following is
true:

  - The new device is not affected by power states.

  - The system is not in a power state affected mode.

  - The system has switched to oneshot mode. The oneshot broadcast is
    controlled from the deep idle state. The CPU is not in idle at
    this point, so it's safe to remove it from the mask.

If we clear the broadcast bit for the CPU when a new device is
installed, we also shutdown the broadcast device when this was the
last CPU in the broadcast mask.

If the broadcast bit is kept, then we leave the new device in shutdown
state and rely on the broadcast to deliver the timer interrupts via
the broadcast IPIs.

Reported-and-tested-by: Stehle Vincent-B46079 <B46079@freescale.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: John Stultz <john.stultz@linaro.org>,
Cc: Mark Rutland <mark.rutland@arm.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1307012153060.4013@ionos.tec.linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agomd/raid10: fix two problems with RAID10 resync.
NeilBrown [Tue, 16 Jul 2013 06:50:47 +0000 (16:50 +1000)]
md/raid10: fix two problems with RAID10 resync.

commit 7bb23c4934059c64cbee2e41d5d24ce122285176 upstream.

1/ When a difference between blocks is found, data is copied from
   one bio to the other.  However bv_len is used as the length to
   copy and this could be zero.  So use r10_bio->sectors to calculate
   the length instead.
   Using bv_len was probably always a bit dubious, but the introduction
   of bio_advance made it much more likely to be a problem.

2/ When preparing some blocks for sync, we don't set BIO_UPTODATE
   except on bios that we schedule for a read.  This ensures that
   missing/failed devices don't confuse the loop at the top of
   sync_request write.
   Commit 8be185f2c9d54d6 "raid10: Use bio_reset()"
   removed a loop which set BIO_UPTODATE on all appropriate bios.
   So we need to re-add that flag.

These bugs were introduced in 3.10, so this patch is suitable for
3.10-stable, and can remove a potential for data corruption.

Reported-by: Brassow Jonathan <jbrassow@redhat.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agomd/raid10: fix two bugs affecting RAID10 reshape.
NeilBrown [Tue, 2 Jul 2013 05:58:05 +0000 (15:58 +1000)]
md/raid10: fix two bugs affecting RAID10 reshape.

commit 78eaa0d4cbcdb345992fa3dd22b3bcbb473cc064 upstream.

1/ If a RAID10 is being reshaped to a smaller number of devices
 and is stopped while this is ongoing, then when the array is
 reassembled the 'mirrors' array will be allocated too small.
 This will lead to an access error or memory corruption.

2/ A sanity test for when a reshaping RAID10 array is restarted
 is slightly incorrect.

Due to the first bug, this is suitable for any -stable
kernel since 3.5 where this code was introduced.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agomd/raid10: fix bug which causes all RAID10 reshapes to move no data.
NeilBrown [Thu, 4 Jul 2013 06:41:53 +0000 (16:41 +1000)]
md/raid10: fix bug which causes all RAID10 reshapes to move no data.

commit 1376512065b23f39d5f9a160948f313397dde972 upstream.

The recent commit:
commit 7e83ccbecd608b971f340e951c9e84cd0343002f
    md/raid10: Allow skipping recovery when clean arrays are assembled

causes raid10 to skip a recovery in certain cases where it is safe to
do so.  Unfortunately it also causes a reshape to be skipped which is
never safe.  The result is that an attempt to reshape a RAID10 will
appear to complete instantly, but no data will have been moved so the
array will now contain garbage.
(If nothing is written, you can recover by simply performing the
reverse reshape which will also complete instantly).

Bug was introduced in 3.10, so this is suitable for 3.10-stable.

Signed-off-by: NeilBrown <neilb@suse.de>
Cc: Martin Wilck <mwilck@arcor.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoASoC: sglt5000: Fix SGTL5000_PLL_FRAC_DIV_MASK
Fabio Estevam [Thu, 4 Jul 2013 23:01:03 +0000 (20:01 -0300)]
ASoC: sglt5000: Fix SGTL5000_PLL_FRAC_DIV_MASK

commit 5c78dfe87ea04b501ee000a7f03b9432ac9d008c upstream.

SGTL5000_PLL_FRAC_DIV_MASK is used to mask bits 0-10 (11 bits in total) of
register CHIP_PLL_CTRL, so fix the mask to accommodate this whole bit range.
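
An 11-bit field (bits 0-10) needs an 11-bit mask, so the define ends up as
something like:

  #define SGTL5000_PLL_FRAC_DIV_MASK  0x7ff  /* bits 0-10 of CHIP_PLL_CTRL */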

Reported-by: Oskar Schirmer <oskar@scara.com>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoASoC: atmel: Fix unlocked snd_pcm_stop() call
Takashi Iwai [Thu, 11 Jul 2013 16:00:01 +0000 (18:00 +0200)]
ASoC: atmel: Fix unlocked snd_pcm_stop() call

commit 571185717f8d7f2a088a7ac38d94a9ad5fd9da5c upstream.

snd_pcm_stop() must be called in the PCM substream lock context.
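
The same pattern applies to the whole series of "Fix unlocked snd_pcm_stop()
call" patches below; a minimal sketch (the state argument varies per driver):

  snd_pcm_stream_lock_irq(substream);
  snd_pcm_stop(substream, SNDRV_PCM_STATE_DISCONNECTED);
  snd_pcm_stream_unlock_irq(substream);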

Acked-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoASoC: s6000: Fix unlocked snd_pcm_stop() call
Takashi Iwai [Thu, 11 Jul 2013 16:00:25 +0000 (18:00 +0200)]
ASoC: s6000: Fix unlocked snd_pcm_stop() call

commit 61be2b9a18ec70f3cbe3deef7a5f77869c71b5ae upstream.

snd_pcm_stop() must be called in the PCM substream lock context.

Acked-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoi2c-piix4: Add AMD CZ SMBus device ID
Shane Huang [Mon, 3 Jun 2013 10:24:55 +0000 (18:24 +0800)]
i2c-piix4: Add AMD CZ SMBus device ID

commit b996ac90f595dda271cbd858b136b45557fc1a57 upstream.

Add the AMD CZ SMBus controller device ID.

[bhelgaas: drop pci_ids.h update]
Signed-off-by: Shane Huang <shane.huang@amd.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agosata_highbank: increase retry count but shorten duration for Calxeda controller
Mark Langsdorf [Mon, 3 Jun 2013 13:22:54 +0000 (08:22 -0500)]
sata_highbank: increase retry count but shorten duration for Calxeda controller

commit ddfef5de3d716f77bad32dbbba6b280158dfd721 upstream.

Increase the retry count for the hard reset function to 100 but
shorten the timeout period to 500 ms. See the comment for
ahci_highbank_hardreset for the reasons why those values were
chosen.

Signed-off-by: Mark Langsdorf <mark.langsdorf@calxeda.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoata_piix: IDE-mode SATA patch for Intel Coleto Creek DeviceIDs
Seth Heasley [Wed, 19 Jun 2013 23:25:37 +0000 (16:25 -0700)]
ata_piix: IDE-mode SATA patch for Intel Coleto Creek DeviceIDs

commit c7e8695bfa0611b39493a9dfe8bab9f63f9809bd upstream.

This patch adds the IDE-mode SATA DeviceIDs for the Intel Coleto Creek PCH.

Signed-off-by: Seth Heasley <seth.heasley@intel.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agolibata: skip SRST for all SIMG [34]7x port-multipliers
Tejun Heo [Tue, 11 Jun 2013 07:11:36 +0000 (00:11 -0700)]
libata: skip SRST for all SIMG [34]7x port-multipliers

commit 7a87718d92760fc688628ad6a430643dafa16f1f upstream.

For some reason, a lot of port-multipliers have issues with softreset.
SIMG [34]7x series port-multipliers have been quite erratic in this
regard.  I recall that it was better with some firmware revisions and
the current list of quirks worked fine for a while.  I think it got
worse with later firmwares or maybe my test coverage wasn't good
enough.  Anyways, HPA is reporting that his 3726 setup suffers SRST
failures and then the PMP gets confused and fails to probe the last
port.

The hope was that we try to stick to the standard as much as possible
and soonish the PMPs and their firmwares will improve in quality, so
the quirk list was kept to a minimum.  Well, it seems like that's never
gonna happen.

Let's set NO_SRST for all [34]7x PMPs so that whatever remaining
userbase of the device suffers the least.  Maybe we should do the same
for 57xx's but unfortunately I don't have any device left to test and
I'm not even sure 57xx's have ever been made widely available, so
let's leave those alone for now.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agolibata-zpodd: must use ata_tf_init()
Sergei Shtylyov [Sun, 23 Jun 2013 19:25:04 +0000 (23:25 +0400)]
libata-zpodd: must use ata_tf_init()

commit d0887c43f51c308b01605346e55d906ba858a6f9 upstream.

There are some SATA controllers which have both devices 0 and 1, but this module
just zeroes out the taskfile and then sets ATA_TFLAG_DEVICE (not sure that's needed),
which could lead to a wrong device being selected just before issuing a command.
Thus we should call ata_tf_init(), which sets up the device register value
properly, like all other users of ata_exec_internal() do...

Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agohwmon: (nct6775) Drop unsupported fan alarm attributes for NCT6775
Guenter Roeck [Sun, 23 Jun 2013 20:04:04 +0000 (13:04 -0700)]
hwmon: (nct6775) Drop unsupported fan alarm attributes for NCT6775

commit 41fa9a944fce1d7efd5ee3d50ac85b92f42dcc3d upstream.

NCT6775 does not support alarms for fans 4 and 5. Drop the attributes.

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agohwmon: (nct6775) Fix temperature alarm attributes
Guenter Roeck [Sat, 22 Jun 2013 23:15:31 +0000 (16:15 -0700)]
hwmon: (nct6775) Fix temperature alarm attributes

commit b1d2bff6a61140454b9d203519cc686a2e9ef32f upstream.

The driver displays wrong alarms for temperature attributes.

It turns out that temperature alarm bits are not fixed, but determined
by temperature source mapping. To fix the problem, walk through
the temperature sources to determine the correct alarm bit associated
with a given attribute.
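
Roughly (all names are illustrative, not the actual nct6775.c structures),
the lookup becomes a walk over the configured temperature sources:

  static int temp_alarm_bit(const u8 *temp_src, const u8 *alarm_src,
                            int nr_alarm_bits, int attr_index)
  {
          int bit;

          /* Find the alarm bit whose source matches this attribute's source. */
          for (bit = 0; bit < nr_alarm_bits; bit++)
                  if (alarm_src[bit] == temp_src[attr_index])
                          return bit;
          return -1;  /* no alarm bit maps to this temperature source */
  }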

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: usx2y: Fix unlocked snd_pcm_stop() call
Takashi Iwai [Thu, 11 Jul 2013 15:58:47 +0000 (17:58 +0200)]
ALSA: usx2y: Fix unlocked snd_pcm_stop() call

commit 5be1efb4c2ed79c3d7c0cbcbecae768377666e84 upstream.

snd_pcm_stop() must be called in the PCM substream lock context.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: asihpi: Fix unlocked snd_pcm_stop() call
Takashi Iwai [Thu, 11 Jul 2013 15:55:57 +0000 (17:55 +0200)]
ALSA: asihpi: Fix unlocked snd_pcm_stop() call

commit 60478295d6876619f8f47f6d1a5c25eaade69ee3 upstream.

snd_pcm_stop() must be called in the PCM substream lock context.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: atiixp: Fix unlocked snd_pcm_stop() call
Takashi Iwai [Thu, 11 Jul 2013 15:56:56 +0000 (17:56 +0200)]
ALSA: atiixp: Fix unlocked snd_pcm_stop() call

commit cc7282b8d5abbd48c81d1465925d464d9e3eaa8f upstream.

snd_pcm_stop() must be called in the PCM substream lock context.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: pxa2xx: Fix unlocked snd_pcm_stop() call
Takashi Iwai [Thu, 11 Jul 2013 15:59:33 +0000 (17:59 +0200)]
ALSA: pxa2xx: Fix unlocked snd_pcm_stop() call

commit 46f6c1aaf790be9ea3c8ddfc8f235a5f677d08e2 upstream.

snd_pcm_stop() must be called in the PCM substream lock context.

Acked-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: ua101: Fix unlocked snd_pcm_stop() call
Takashi Iwai [Thu, 11 Jul 2013 15:58:25 +0000 (17:58 +0200)]
ALSA: ua101: Fix unlocked snd_pcm_stop() call

commit 9538aa46c2427d6782aa10036c4da4c541605e0e upstream.

snd_pcm_stop() must be called in the PCM substream lock context.

Acked-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: 6fire: Fix unlocked snd_pcm_stop() call
Takashi Iwai [Thu, 11 Jul 2013 15:57:55 +0000 (17:57 +0200)]
ALSA: 6fire: Fix unlocked snd_pcm_stop() call

commit 5b9ab3f7324a1b94a5a5a76d44cf92dfeb3b5e80 upstream.

snd_pcm_stop() must be called in the PCM substream lock context.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: seq-oss: Initialize MIDI clients asynchronously
Takashi Iwai [Tue, 16 Jul 2013 10:17:49 +0000 (12:17 +0200)]
ALSA: seq-oss: Initialize MIDI clients asynchronously

commit 256ca9c3ad5013ff8a8f165e5a82fab437628c8e upstream.

We've got bug reports that module loading got stuck on a Debian system
with a 3.10 kernel.  The debugging session revealed that the initial
registration of OSS sequencer clients got stuck at module loading time,
which again involves request_module() at the init phase.  This is
triggered only by the special --install stuff Debian is using, but it's
still not good to have such loops.

As a workaround, call the registration part asynchronously.  This is a
better approach irrespective of the hang fix, anyway.

Reported-and-tested-by: Philipp Matthias Hahn <pmhahn@pmhahn.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: hda - Add new GPU codec ID to snd-hda
Aaron Plattner [Fri, 12 Jul 2013 18:01:37 +0000 (11:01 -0700)]
ALSA: hda - Add new GPU codec ID to snd-hda

commit d52392b1a80458c0510810789c7db4a39b88022a upstream.

Vendor ID 0x10de0060 is used by a yet-to-be-named GPU chip.

Reviewed-by: Andy Ritger <aritger@nvidia.com>
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: hda - Fix the max length of control name in generic parser
Takashi Iwai [Fri, 28 Jun 2013 09:51:32 +0000 (11:51 +0200)]
ALSA: hda - Fix the max length of control name in generic parser

commit 0c055b3413868227f2e85701c4e6938c9581f0e2 upstream.

add_control_with_pfx() in hda_generic.c assumes a shorter name string
for the control element, and this resulted in the truncation of a
long but valid string like "Headphone Surround Switch" in the middle.

This patch aligns the max size to the actual limit of snd_ctl_elem_id,
44.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: hda - Fix missing Mic Boost controls for VIA codecs
Takashi Iwai [Wed, 19 Jun 2013 05:54:09 +0000 (07:54 +0200)]
ALSA: hda - Fix missing Mic Boost controls for VIA codecs

commit d045c5dc43d829df9f067d363c3b42b14dacf434 upstream.

Some VIA codecs like VT1708S have Mic boost amps in the mic pins but
they aren't exposed in the capability bits.  In the past driver code,
we override the pin caps and create mic boost controls forcibly.
During the transition to the generic parser, we lost the mic boost controls
although the pin caps are still overridden, because the generic parser
code checks the widget caps, too.

So this patch adds a new helper function to allow the override of the
given widget capability bits, and makes the VIA codec driver add the
missing input-amp capability bit.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=59861
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: hda - Cache the MUX selection for generic HDMI
Takashi Iwai [Tue, 18 Jun 2013 14:14:22 +0000 (16:14 +0200)]
ALSA: hda - Cache the MUX selection for generic HDMI

commit bddee96b5d0db869f47b195fe48c614ca824203c upstream.

When a selection to a converter MUX is changed in hdmi_pcm_open(), it
should be cached so that the given connection can be restored properly
at PM resume.  We need just to replace the corresponding
snd_hda_codec_write() call with snd_hda_codec_write_cache().
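
A sketch of the change (the nid and mux index are placeholders here):

  /* Cached write: the cache is replayed at PM resume, so the MUX
     selection survives a suspend/resume cycle. */
  snd_hda_codec_write_cache(codec, pin_nid, 0,
                            AC_VERB_SET_CONNECT_SEL, mux_idx);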

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: hda - Fix return value of snd_hda_check_power_state()
Takashi Iwai [Tue, 18 Jun 2013 05:55:02 +0000 (07:55 +0200)]
ALSA: hda - Fix return value of snd_hda_check_power_state()

commit 06ec56d3c60238f27bfa50d245592fccc1b4ef0f upstream.

The refactoring by commit 9040d102 introduced the new function
snd_hda_check_power_state().  This function is supposed to return true
if the state already reached the target state, but it actually
returns false for that.  An utterly stupid typo made during copy & paste.

Fortunately this didn't influence much behavior, because powering up
AFG usually powers up the child widgets, too.  But the finer power
control must have been broken by this bug.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoALSA: hda - Fix EAPD vmaster hook for AD1884 & co
Takashi Iwai [Thu, 4 Jul 2013 10:54:22 +0000 (12:54 +0200)]
ALSA: hda - Fix EAPD vmaster hook for AD1884 & co

commit 8f0b3b7e222383a21f7d58bd97d5552b3a5dbced upstream.

ad1884_fixup_hp_eapd() tries to set the NID for controlling the
speaker EAPD from the pin configuration.  But the current code can't
work as expected since it sets spec->eapd_nid before calling the
generic parser where the autocfg pins are set up.

This patch changes the function to set spec->eapd_nid after the
generic parser call while it sets vmaster hook unconditionally.  The
spec->eapd_nid check is moved in the hook function itself instead.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoiio: inkern: fix iio_convert_raw_to_processed_unlocked
Alexandre Belloni [Mon, 1 Jul 2013 16:40:00 +0000 (17:40 +0100)]
iio: inkern: fix iio_convert_raw_to_processed_unlocked

commit f91d1b63a4e096d3023aaaafec9d9d3aff25997f upstream.

When reading IIO_CHAN_INFO_OFFSET, the return value of iio_channel_read() for
success will be an IIO_VAL_* code, so checking for 0 is not correct.

Without this fix the offset applied by iio drivers will be ignored when
converting a raw value to one in appropriate base units (e.g. mV) in
IIO client drivers that use iio_convert_raw_to_processed, including
iio-hwmon.
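
A simplified sketch of the corrected check inside
iio_convert_raw_to_processed_unlocked():

  int offset;

  ret = iio_channel_read(chan, &offset, NULL, IIO_CHAN_INFO_OFFSET);
  if (ret >= 0)  /* success returns an IIO_VAL_* code, not 0 */
          raw64 += offset;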

Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoiio: Fix iio_channel_has_info
Alexandre Belloni [Mon, 1 Jul 2013 14:20:00 +0000 (15:20 +0100)]
iio: Fix iio_channel_has_info

commit 1c297a66654a3295ae87e2b7f3724d214eb2b5ec upstream.

Since the info_mask split, iio_channel_has_info() is not working correctly.
The masks are now info_mask_separate and info_mask_shared_by_type, and it is
not possible to compare them directly with the iio_chan_info_enum enum.
Correct that by using the BIT() macro.
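
With the masks being bitmaps and the enum being a bit number, the check looks
roughly like:

  static inline bool iio_channel_has_info(const struct iio_chan_spec *chan,
                                          enum iio_chan_info_enum type)
  {
          return (chan->info_mask_separate & BIT(type)) |
                 (chan->info_mask_shared_by_type & BIT(type));
  }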

Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agoarm64: mm: don't treat user cache maintenance faults as writes
Will Deacon [Fri, 19 Jul 2013 14:37:12 +0000 (15:37 +0100)]
arm64: mm: don't treat user cache maintenance faults as writes

commit db6f41063cbdb58b14846e600e6bc3f4e4c2e888 upstream.

On arm64, cache maintenance faults appear as data aborts with the CM
bit set in the ESR. The WnR bit, usually used to distinguish between
faulting loads and stores, always reads as 1 and (slightly confusingly)
the instructions are treated as reads by the architecture.

This patch fixes our fault handling code to treat cache maintenance
faults in the same way as loads.
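
A sketch of the check in the fault handler (bit positions as in the ARMv8
ESR ISS encoding, stated here as an assumption rather than quoted from the
patch):

  #define ESR_WNR  (1 << 6)  /* Write-not-Read */
  #define ESR_CM   (1 << 8)  /* Cache Maintenance operation */

  /* Treat the abort as a write only when WnR is set and it is not
     a cache maintenance operation. */
  if ((esr & ESR_WNR) && !(esr & ESR_CM))
          flags |= FAULT_FLAG_WRITE;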

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agocpufreq: Revert commit 2f7021a8 to fix CPU hotplug regression
Srivatsa S. Bhat [Tue, 16 Jul 2013 20:46:48 +0000 (22:46 +0200)]
cpufreq: Revert commit 2f7021a8 to fix CPU hotplug regression

commit e8d05276f236ee6435e78411f62be9714e0b9377 upstream.

commit 2f7021a8 "cpufreq: protect 'policy->cpus' from offlining
during __gov_queue_work()" caused a regression in CPU hotplug,
because it led to a deadlock between the cpufreq governor worker thread
and the CPU hotplug writer task.

Lockdep splat corresponding to this deadlock is shown below:

[   60.277396] ======================================================
[   60.277400] [ INFO: possible circular locking dependency detected ]
[   60.277407] 3.10.0-rc7-dbg-01385-g241fd04-dirty #1744 Not tainted
[   60.277411] -------------------------------------------------------
[   60.277417] bash/2225 is trying to acquire lock:
[   60.277422]  ((&(&j_cdbs->work)->work)){+.+...}, at: [<ffffffff810621b5>] flush_work+0x5/0x280
[   60.277444] but task is already holding lock:
[   60.277449]  (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff81042d8b>] cpu_hotplug_begin+0x2b/0x60
[   60.277465] which lock already depends on the new lock.

[   60.277472] the existing dependency chain (in reverse order) is:
[   60.277477] -> #2 (cpu_hotplug.lock){+.+.+.}:
[   60.277490]        [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200
[   60.277503]        [<ffffffff815b6157>] mutex_lock_nested+0x67/0x410
[   60.277514]        [<ffffffff81042cbc>] get_online_cpus+0x3c/0x60
[   60.277522]        [<ffffffff814b842a>] gov_queue_work+0x2a/0xb0
[   60.277532]        [<ffffffff814b7891>] cs_dbs_timer+0xc1/0xe0
[   60.277543]        [<ffffffff8106302d>] process_one_work+0x1cd/0x6a0
[   60.277552]        [<ffffffff81063d31>] worker_thread+0x121/0x3a0
[   60.277560]        [<ffffffff8106ae2b>] kthread+0xdb/0xe0
[   60.277569]        [<ffffffff815bb96c>] ret_from_fork+0x7c/0xb0
[   60.277580] -> #1 (&j_cdbs->timer_mutex){+.+...}:
[   60.277592]        [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200
[   60.277600]        [<ffffffff815b6157>] mutex_lock_nested+0x67/0x410
[   60.277608]        [<ffffffff814b785d>] cs_dbs_timer+0x8d/0xe0
[   60.277616]        [<ffffffff8106302d>] process_one_work+0x1cd/0x6a0
[   60.277624]        [<ffffffff81063d31>] worker_thread+0x121/0x3a0
[   60.277633]        [<ffffffff8106ae2b>] kthread+0xdb/0xe0
[   60.277640]        [<ffffffff815bb96c>] ret_from_fork+0x7c/0xb0
[   60.277649] -> #0 ((&(&j_cdbs->work)->work)){+.+...}:
[   60.277661]        [<ffffffff810ab826>] __lock_acquire+0x1766/0x1d30
[   60.277669]        [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200
[   60.277677]        [<ffffffff810621ed>] flush_work+0x3d/0x280
[   60.277685]        [<ffffffff81062d8a>] __cancel_work_timer+0x8a/0x120
[   60.277693]        [<ffffffff81062e53>] cancel_delayed_work_sync+0x13/0x20
[   60.277701]        [<ffffffff814b89d9>] cpufreq_governor_dbs+0x529/0x6f0
[   60.277709]        [<ffffffff814b76a7>] cs_cpufreq_governor_dbs+0x17/0x20
[   60.277719]        [<ffffffff814b5df8>] __cpufreq_governor+0x48/0x100
[   60.277728]        [<ffffffff814b6b80>] __cpufreq_remove_dev.isra.14+0x80/0x3c0
[   60.277737]        [<ffffffff815adc0d>] cpufreq_cpu_callback+0x38/0x4c
[   60.277747]        [<ffffffff81071a4d>] notifier_call_chain+0x5d/0x110
[   60.277759]        [<ffffffff81071b0e>] __raw_notifier_call_chain+0xe/0x10
[   60.277768]        [<ffffffff815a0a68>] _cpu_down+0x88/0x330
[   60.277779]        [<ffffffff815a0d46>] cpu_down+0x36/0x50
[   60.277788]        [<ffffffff815a2748>] store_online+0x98/0xd0
[   60.277796]        [<ffffffff81452a28>] dev_attr_store+0x18/0x30
[   60.277806]        [<ffffffff811d9edb>] sysfs_write_file+0xdb/0x150
[   60.277818]        [<ffffffff8116806d>] vfs_write+0xbd/0x1f0
[   60.277826]        [<ffffffff811686fc>] SyS_write+0x4c/0xa0
[   60.277834]        [<ffffffff815bbbbe>] tracesys+0xd0/0xd5
[   60.277842] other info that might help us debug this:

[   60.277848] Chain exists of:
  (&(&j_cdbs->work)->work) --> &j_cdbs->timer_mutex --> cpu_hotplug.lock

[   60.277864]  Possible unsafe locking scenario:

[   60.277869]        CPU0                    CPU1
[   60.277873]        ----                    ----
[   60.277877]   lock(cpu_hotplug.lock);
[   60.277885]                                lock(&j_cdbs->timer_mutex);
[   60.277892]                                lock(cpu_hotplug.lock);
[   60.277900]   lock((&(&j_cdbs->work)->work));
[   60.277907]  *** DEADLOCK ***

[   60.277915] 6 locks held by bash/2225:
[   60.277919]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff81168173>] vfs_write+0x1c3/0x1f0
[   60.277937]  #1:  (&buffer->mutex){+.+.+.}, at: [<ffffffff811d9e3c>] sysfs_write_file+0x3c/0x150
[   60.277954]  #2:  (s_active#61){.+.+.+}, at: [<ffffffff811d9ec3>] sysfs_write_file+0xc3/0x150
[   60.277972]  #3:  (x86_cpu_hotplug_driver_mutex){+.+...}, at: [<ffffffff81024cf7>] cpu_hotplug_driver_lock+0x17/0x20
[   60.277990]  #4:  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff815a0d32>] cpu_down+0x22/0x50
[   60.278007]  #5:  (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff81042d8b>] cpu_hotplug_begin+0x2b/0x60
[   60.278023] stack backtrace:
[   60.278031] CPU: 3 PID: 2225 Comm: bash Not tainted 3.10.0-rc7-dbg-01385-g241fd04-dirty #1744
[   60.278037] Hardware name: Acer             Aspire 5741G    /Aspire 5741G    , BIOS V1.20 02/08/2011
[   60.278042]  ffffffff8204e110 ffff88014df6b9f8 ffffffff815b3d90 ffff88014df6ba38
[   60.278055]  ffffffff815b0a8d ffff880150ed3f60 ffff880150ed4770 3871c4002c8980b2
[   60.278068]  ffff880150ed4748 ffff880150ed4770 ffff880150ed3f60 ffff88014df6bb00
[   60.278081] Call Trace:
[   60.278091]  [<ffffffff815b3d90>] dump_stack+0x19/0x1b
[   60.278101]  [<ffffffff815b0a8d>] print_circular_bug+0x2b6/0x2c5
[   60.278111]  [<ffffffff810ab826>] __lock_acquire+0x1766/0x1d30
[   60.278123]  [<ffffffff81067e08>] ? __kernel_text_address+0x58/0x80
[   60.278134]  [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200
[   60.278142]  [<ffffffff810621b5>] ? flush_work+0x5/0x280
[   60.278151]  [<ffffffff810621ed>] flush_work+0x3d/0x280
[   60.278159]  [<ffffffff810621b5>] ? flush_work+0x5/0x280
[   60.278169]  [<ffffffff810a9b14>] ? mark_held_locks+0x94/0x140
[   60.278178]  [<ffffffff81062d77>] ? __cancel_work_timer+0x77/0x120
[   60.278188]  [<ffffffff810a9cbd>] ? trace_hardirqs_on_caller+0xfd/0x1c0
[   60.278196]  [<ffffffff81062d8a>] __cancel_work_timer+0x8a/0x120
[   60.278206]  [<ffffffff81062e53>] cancel_delayed_work_sync+0x13/0x20
[   60.278214]  [<ffffffff814b89d9>] cpufreq_governor_dbs+0x529/0x6f0
[   60.278225]  [<ffffffff814b76a7>] cs_cpufreq_governor_dbs+0x17/0x20
[   60.278234]  [<ffffffff814b5df8>] __cpufreq_governor+0x48/0x100
[   60.278244]  [<ffffffff814b6b80>] __cpufreq_remove_dev.isra.14+0x80/0x3c0
[   60.278255]  [<ffffffff815adc0d>] cpufreq_cpu_callback+0x38/0x4c
[   60.278265]  [<ffffffff81071a4d>] notifier_call_chain+0x5d/0x110
[   60.278275]  [<ffffffff81071b0e>] __raw_notifier_call_chain+0xe/0x10
[   60.278284]  [<ffffffff815a0a68>] _cpu_down+0x88/0x330
[   60.278292]  [<ffffffff81024cf7>] ? cpu_hotplug_driver_lock+0x17/0x20
[   60.278302]  [<ffffffff815a0d46>] cpu_down+0x36/0x50
[   60.278311]  [<ffffffff815a2748>] store_online+0x98/0xd0
[   60.278320]  [<ffffffff81452a28>] dev_attr_store+0x18/0x30
[   60.278329]  [<ffffffff811d9edb>] sysfs_write_file+0xdb/0x150
[   60.278337]  [<ffffffff8116806d>] vfs_write+0xbd/0x1f0
[   60.278347]  [<ffffffff81185950>] ? fget_light+0x320/0x4b0
[   60.278355]  [<ffffffff811686fc>] SyS_write+0x4c/0xa0
[   60.278364]  [<ffffffff815bbbbe>] tracesys+0xd0/0xd5
[   60.280582] smpboot: CPU 1 is now offline

The intention of that commit was to avoid warnings during CPU
hotplug, which indicated that offline CPUs were getting IPIs from the
cpufreq governor's work items.  But the real root-cause of that
problem was commit a66b2e5 (cpufreq: Preserve sysfs files across
suspend/resume) because it totally skipped all the cpufreq callbacks
during CPU hotplug in the suspend/resume path, and hence it never
actually shut down the cpufreq governor's worker threads during CPU
offline in the suspend/resume path.

Reflecting back, the reason why we never suspected that commit as the
root-cause earlier, was that the original issue was reported with
just the halt command and nobody had brought in suspend/resume to the
equation.

The reason for _that_ in turn, as it turns out, is that earlier
halt/shutdown was being done by disabling non-boot CPUs while tasks
were frozen, just like suspend/resume....  but commit cf7df378a
(reboot: migrate shutdown/reboot to boot cpu) which came somewhere
along that very same time changed that logic: shutdown/halt no longer
takes CPUs offline.  Thus, the test-cases for reproducing the bug
were vastly different and thus we went totally off the trail.

Overall, it was one hell of a confusion with so many commits
affecting each other and also affecting the symptoms of the problems
in subtle ways.  Finally, now since the original problematic commit
(a66b2e5) has been completely reverted, revert this intermediate fix
too (2f7021a8), to fix the CPU hotplug deadlock.  Phew!

Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Tested-by: Peter Wu <lekensteyn@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agocpufreq: Revert commit a66b2e to fix suspend/resume regression
Srivatsa S. Bhat [Thu, 11 Jul 2013 22:15:37 +0000 (03:45 +0530)]
cpufreq: Revert commit a66b2e to fix suspend/resume regression

commit aae760ed21cd690fe8a6db9f3a177ad55d7e12ab upstream.

commit a66b2e (cpufreq: Preserve sysfs files across suspend/resume)
has unfortunately caused several things in the cpufreq subsystem to
break subtly after a suspend/resume cycle.

The intention of that patch was to retain the file permissions of the
cpufreq related sysfs files across suspend/resume.  To achieve that,
the commit completely removed the calls to cpufreq_add_dev() and
__cpufreq_remove_dev() during suspend/resume transitions.  But the
problem is that those functions do 2 kinds of things:
  1. Low-level initialization/tear-down that are critical to the
     correct functioning of cpufreq-core.
  2. Kobject and sysfs related initialization/teardown.

Ideally we should have reorganized the code to cleanly separate these
two responsibilities, and skipped only the sysfs related parts during
suspend/resume.  Since we skipped the entire callbacks instead (which
also included some CPU and cpufreq-specific critical components),
the cpufreq subsystem started behaving erratically after suspend/resume.

So revert the commit to fix the regression.  We'll revisit and address
the original goal of that commit separately, since it involves quite a
bit of careful code reorganization and appears to be non-trivial.

(While reverting the commit, note that another commit f51e1eb
 (cpufreq: Fix cpufreq regression after suspend/resume) already
 reverted part of the original set of changes.  So revert only the
 remaining ones).

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agopowerpc/perf: Don't enable if we have zero events
Michael Ellerman [Fri, 28 Jun 2013 08:15:14 +0000 (18:15 +1000)]
powerpc/perf: Don't enable if we have zero events

commit 4ea355b5368bde0574c12430df53334c4be3bdcf upstream.

In power_pmu_enable() we still enable the PMU even if we have zero
events. This should have no effect but doesn't make much sense. Instead
just return after telling the hypervisor that we are not using the PMCs.
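
A sketch of the early return, following the description above:

  if (!cpuhw->n_events) {
          ppc_set_pmu_inuse(0);  /* tell the hypervisor the PMCs are unused */
          goto out;
  }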

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agopowerpc/perf: Use existing out label in power_pmu_enable()
Michael Ellerman [Fri, 28 Jun 2013 08:15:13 +0000 (18:15 +1000)]
powerpc/perf: Use existing out label in power_pmu_enable()

commit 0a48843d6c5114cfa4a9540ee4d6af87628cec01 upstream.

In power_pmu_enable() we can use the existing out label to reduce the
number of return paths.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agopowerpc/perf: Freeze PMC5/6 if we're not using them
Michael Ellerman [Fri, 28 Jun 2013 08:15:12 +0000 (18:15 +1000)]
powerpc/perf: Freeze PMC5/6 if we're not using them

commit 7a7a41f9d5b28ac3a916b057a7d3cd3f435ee9a6 upstream.

On Power8 we can freeze PMC5 and 6 if we're not using them. Normally they
run all the time.

As noticed by Anshuman, we should unfreeze them when we disable the PMU
as there are legacy tools which expect them to run all the time.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agopowerpc/perf: Rework disable logic in pmu_disable()
Michael Ellerman [Fri, 28 Jun 2013 08:15:11 +0000 (18:15 +1000)]
powerpc/perf: Rework disable logic in pmu_disable()

commit 378a6ee99e4a431ec84e4e61893445c041c93007 upstream.

In pmu_disable() we disable the PMU by setting the FC (Freeze Counters)
bit in MMCR0. In order to do this we have to read/modify/write MMCR0.

It's possible that we read a value from MMCR0 which has PMAO (PMU Alert
Occurred) set. When we write that value back it will cause an interrupt
to occur. We will then end up in the PMU interrupt handler even though
we are supposed to have just disabled the PMU.

We can avoid this by making sure we never write PMAO back. We should not
lose interrupts because when the PMU is re-enabled the overflowed values
will cause another interrupt.

We also reorder the clearing of SAMPLE_ENABLE so that is done after the
PMU is frozen. Otherwise there is a small window between the clearing of
SAMPLE_ENABLE and the setting of FC where we could take an interrupt and
incorrectly see SAMPLE_ENABLE not set. This would for example change the
logic in perf_read_regs().
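
Schematically (not the literal power_pmu_disable() diff), the sequence
becomes:

  val  = mfspr(SPRN_MMCR0);
  val |= MMCR0_FC;       /* freeze the counters */
  val &= ~MMCR0_PMAO;    /* never write PMAO back: doing so would raise
                            a PMU interrupt right after disabling */
  mtspr(SPRN_MMCR0, val);
  mb();                  /* make sure the freeze has taken effect */

  /* Only now clear SAMPLE_ENABLE, with the counters already frozen. */
  mtspr(SPRN_MMCRA, mfspr(SPRN_MMCRA) & ~MMCRA_SAMPLE_ENABLE);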

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agopowerpc/perf: Check that events only include valid bits on Power8
Michael Ellerman [Fri, 28 Jun 2013 08:15:10 +0000 (18:15 +1000)]
powerpc/perf: Check that events only include valid bits on Power8

commit d8bec4c9cd58f6d3679e09b7293851fb92ad7557 upstream.

A mistake we have made in the past is that we pull out the fields we
need from the event code, but don't check that there are no unknown bits
set. This means that we can't ever assign meaning to those unknown bits
in future.
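
The check itself is a one-liner in spirit (the mask name is illustrative):

  if (event_code & ~EVENT_VALID_MASK)
          return -1;  /* unknown bits set: reject the event */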

Although we have once again failed to do this at release, it is still
early days for Power8 so I think we can still slip this in and get away
with it.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agopowerpc/numa: Do not update sysfs cpu registration from invalid context
Nathan Fontenot [Tue, 25 Jun 2013 03:08:05 +0000 (22:08 -0500)]
powerpc/numa: Do not update sysfs cpu registration from invalid context

commit dd023217e17e72b46fb4d49c7734c426938c3dba upstream.

The topology update code that updates the cpu node registration in sysfs
should not be called while in stop_machine(). The register/unregister
calls take a lock and may sleep.

This patch moves these calls outside of the call to stop_machine().

Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agopowerpc/smp: Section mismatch from smp_release_cpus to __initdata spinning_secondaries
Chen Gang [Wed, 20 Mar 2013 06:30:12 +0000 (14:30 +0800)]
powerpc/smp: Section mismatch from smp_release_cpus to __initdata spinning_secondaries

commit 8246aca7058f3f2c2ae503081777965cd8df7b90 upstream.

smp_release_cpus() is a normal function called in normal environments,
  but it references the __initdata variable spinning_secondaries.
  We need to modify spinning_secondaries to match smp_release_cpus().

the related warning:
  (the linker reports boot_paca.33377, but it should be spinning_secondaries)

-----------------------------------------------------------------------------

WARNING: arch/powerpc/kernel/built-in.o(.text+0x23176): Section mismatch in reference from the function .smp_release_cpus() to the variable .init.data:boot_paca.33377
The function .smp_release_cpus() references
the variable __initdata boot_paca.33377.
This is often because .smp_release_cpus lacks a __initdata
annotation or the annotation of boot_paca.33377 is wrong.

WARNING: arch/powerpc/kernel/built-in.o(.text+0x231fe): Section mismatch in reference from the function .smp_release_cpus() to the variable .init.data:boot_paca.33377
The function .smp_release_cpus() references
the variable __initdata boot_paca.33377.
This is often because .smp_release_cpus lacks a __initdata
annotation or the annotation of boot_paca.33377 is wrong.

-----------------------------------------------------------------------------

Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agopowerpc: Wire up the HV facility unavailable exception
Michael Ellerman [Tue, 25 Jun 2013 07:47:57 +0000 (17:47 +1000)]
powerpc: Wire up the HV facility unavailable exception

commit b14b6260efeee6eb8942c6e6420e31281892acb6 upstream.

Similar to the facility unavailable exception, except the facilities are
controlled by HFSCR.

Adapt the facility_unavailable_exception() so it can be called for
either the regular or Hypervisor facility unavailable exceptions.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agopowerpc: Rename and flesh out the facility unavailable exception handler
Michael Ellerman [Tue, 25 Jun 2013 07:47:56 +0000 (17:47 +1000)]
powerpc: Rename and flesh out the facility unavailable exception handler

commit 021424a1fce335e05807fd770eb8e1da30a63eea upstream.

The exception at 0xf60 is not the TM (Transactional Memory) unavailable
exception; it is the "Facility Unavailable Exception", so rename it as
such.

Flesh out the handler to acknowledge the fact that it can be called for
many reasons, one of which is TM being unavailable.

Use STD_EXCEPTION_COMMON() for the exception body; for some reason we
had it open-coded.  I've checked that the generated code is identical.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11 years agopowerpc: Remove KVMTEST from RELON exception handlers
Michael Ellerman [Tue, 25 Jun 2013 07:47:55 +0000 (17:47 +1000)]
powerpc: Remove KVMTEST from RELON exception handlers

commit c9f69518e5f08170bc857984a077f693d63171df upstream.

KVMTEST is a macro which checks whether we are taking an exception from
guest context, if so we branch out of line and eventually call into the
KVM code to handle the switch.

When running real guests on bare metal (HV KVM) the hardware ensures
that we never take a relocation on exception when transitioning from
guest to host. For PR KVM we disable relocation on exceptions ourself in
kvmppc_core_init_vm(), as of commit a413f47 "Disable relocation on
exceptions whenever PR KVM is active".

So convert all the RELON macros to use NOTEST, and drop the remaining
KVM_HANDLER() definitions we have for 0xe40 and 0xe80.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>