Alex Deucher [Thu, 15 Apr 2010 17:31:12 +0000 (13:31 -0400)]
drm/radeon/kms: fix tv dac conflict resolver
commit
08d075116db3592db218bfe0f554cd93c9e12505 upstream.
On systems with the tv dac shared between DVI and TV,
we can only use the dac for one of the connectors.
However, when using a digital monitor on the DVI port,
you can use the dac for the TV connector just fine.
Check the use_digital status when resolving the conflict.
Fixes fdo bug 27649, possibly others.
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Alex Deucher [Tue, 13 Apr 2010 15:21:59 +0000 (11:21 -0400)]
drm/radeon/kms: disable the tv encoder when tv/cv is not in use
commit
d3a67a43b0460bae3e2ac14092497833344ac10d upstream.
Switching between TV and VGA caused VGA to break on some systems
since the TV encoder was left enabled when VGA was used.
Fixes fdo bug 25520.
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Stefan Bader [Mon, 29 Mar 2010 15:53:12 +0000 (17:53 +0200)]
drm/i915: Add no_lvds entry for the Clientron U800
commit
9875557ee8247c3f7390d378c027b45c7535a224 upstream.
BugLink: http://bugs.launchpad.net/ubuntu/bugs/544671
This system claims to have an LVDS panel but does not.
Signed-off-by: Stephane Graber <stgraber@ubuntu.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Jean Delvare [Wed, 14 Apr 2010 14:14:08 +0000 (16:14 +0200)]
hwmon: (sht15) Properly handle the case CONFIG_REGULATOR=n
commit
c7a78d2c2e2537fd24903e966f34aae50319d587 upstream.
When CONFIG_REGULATOR isn't set, regulator_get_voltage() returns 0.
Properly handle this case by not trusting the value.
Reported-by: Jerome Oufella <jerome.oufella@savoirfairelinux.com>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Cc: Jonathan Cameron <jic23@cam.ac.uk>
Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Jerome Oufella [Wed, 14 Apr 2010 14:14:07 +0000 (16:14 +0200)]
hwmon: (sht15) Fix sht15_calc_temp interpolation function
commit
328a2c22abd08911e37fa66f1358f829cecd72e9 upstream.
I discovered two issues.
First the previous sht15_calc_temp() loop did not iterate through the
temppoints array since the (data->supply_uV > temppoints[i - 1].vdd)
test is always true in this direction.
Also, the two-point linear interpolation function was returning biased
values due to a stray division by 1000 which shouldn't have been there.
[JD: Also change the default value for d1 from 0 to something saner.]
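As a rough sketch only (the struct and numbers below are invented stand-ins, not the driver's temppoints table), a two-point interpolation without the stray division looks like:
struct vdd_point { int vdd_uv; int d1; };    /* hypothetical calibration point */

static int interp_d1(struct vdd_point lo, struct vdd_point hi, int supply_uv)
{
    /* plain two-point linear interpolation, keeping the full slope precision */
    return lo.d1 + (supply_uv - lo.vdd_uv) * (hi.d1 - lo.d1)
                   / (hi.vdd_uv - lo.vdd_uv);
}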
Signed-off-by: Jerome Oufella <jerome.oufella@savoirfairelinux.com>
Acked-by: Jonathan Cameron <jic23@cam.ac.uk>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Takashi Iwai [Sat, 10 Apr 2010 19:27:23 +0000 (21:27 +0200)]
ALSA: usb - Fix Oops after usb-midi disconnection
commit
29aac005ff4dc8a5f50b80f4e5c4f59b21c0fb50 upstream.
usb-midi sometimes causes an Oops in snd_usbmidi_output_drain() after
disconnection. This is due to access to endpoints that have already been
released at disconnection while the files are still alive.
This patch fixes the problem by checking the disconnection state in
snd_usbmidi_output_drain() and by releasing the urbs but keeping the
endpoint instances until everything is really freed.
Tested-by: Tvrtko Ursulin <tvrtko@ursulin.net>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Joerg Schirottke [Thu, 15 Apr 2010 06:37:41 +0000 (08:37 +0200)]
ALSA: hda - add a quirk for Clevo M570U laptop
commit
d1501ea844eefdf925f6b711875b4b2b928fddf8 upstream.
Added the matching model for Clevo laptop M570U.
Signed-off-by: Joerg Schirottke <master@kanotix.com>
Tested-by: Maximilian Gerhard <maxbox@directbox.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Trond Myklebust [Sun, 11 Apr 2010 20:48:44 +0000 (16:48 -0400)]
NFSv4: fix delegated locking
commit
0df5dd4aae211edeeeb84f7f84f6d093406d7c22 upstream.
Arnaud Giersch reports that NFSv4 locking is broken when we hold a
delegation since commit 8e469ebd6dc32cbaf620e134d79f740bf0ebab79
(NFSv4: Don't allow posix locking against servers that don't support it).
According to Arnaud, the lock succeeds the first time he opens the file
(since we cannot do a delegated open) but then fails after we start using
delegated opens.
The following patch fixes it by ensuring that locking behaviour is
governed by a per-filesystem capability flag that is initially set, but
gets cleared if the server ever returns an OPEN without the
NFS4_OPEN_RESULT_LOCKTYPE_POSIX flag being set.
Reported-by: Arnaud Giersch <arnaud.giersch@iut-bm.univ-fcomte.fr>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Trond Myklebust [Thu, 25 Mar 2010 17:51:05 +0000 (13:51 -0400)]
NFSv4: Fall back to ordinary lookup if nfs4_atomic_open() returns EISDIR
commit
80e60639f1b7c121a7fea53920c5a4b94009361a upstream.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Anton Blanchard [Tue, 6 Apr 2010 07:02:19 +0000 (17:02 +1000)]
sched: Fix sched_getaffinity()
commit
84fba5ec91f11c0efb27d0ed6098f7447491f0df upstream.
taskset on 2.6.34-rc3 fails on one of my ppc64 test boxes with
the following error:
sched_getaffinity(0, 16, 0x10029650030) = -1 EINVAL (Invalid argument)
This box has 128 threads and 16 bytes is enough to cover it.
Commit cd3d8031eb4311e516329aee03c79a08333141f1 (sched:
sched_getaffinity(): Allow less than NR_CPUS length) is
comparing these 16 bytes against nr_cpu_ids.
Fix it by comparing nr_cpu_ids to the number of bits in the
cpumask we pass in.
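As a minimal user-space-style sketch (function and parameter names are stand-ins, not the kernel code), the corrected check compares in units of bits:
#include <limits.h>   /* CHAR_BIT */
#include <stddef.h>

/* Accept the user buffer as long as it can hold one bit per possible CPU id,
 * instead of demanding that it cover the compile-time NR_CPUS mask. */
static int affinity_len_ok(size_t len_bytes, unsigned int nr_cpu_ids)
{
    return (len_bytes * CHAR_BIT) >= nr_cpu_ids;
}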
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Sharyathi Nagesh <sharyath@in.ibm.com>
Cc: Ulrich Drepper <drepper@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jack Steiner <steiner@sgi.com>
Cc: Russ Anderson <rja@sgi.com>
Cc: Mike Travis <travis@sgi.com>
LKML-Reference: <20100406070218.GM5594@kryten>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
KOSAKI Motohiro [Fri, 12 Mar 2010 07:15:36 +0000 (16:15 +0900)]
sched: sched_getaffinity(): Allow less than NR_CPUS length
commit
cd3d8031eb4311e516329aee03c79a08333141f1 upstream.
[ Note, this commit changes the syscall ABI for > 1024 CPUs systems. ]
Recently, some distro decided to use NR_CPUS=4096 for mysterious reasons.
Unfortunately, glibc sched interface has the following definition:
# define __CPU_SETSIZE 1024
# define __NCPUBITS (8 * sizeof (__cpu_mask))
typedef unsigned long int __cpu_mask;
typedef struct
{
__cpu_mask __bits[__CPU_SETSIZE / __NCPUBITS];
} cpu_set_t;
It means that if NR_CPUS is bigger than 1024, cpu_set_t creates an
ABI issue ...
More recently, Sharyathi Nagesh reported that the following test program
triggers a mysterious syscall failure:
-----------------------------------------------------------------------
#define _GNU_SOURCE
#include<stdio.h>
#include<errno.h>
#include<sched.h>
int main()
{
cpu_set_t set;
if (sched_getaffinity(0, sizeof(cpu_set_t), &set) < 0)
printf("\n Call is failing with:%d", errno);
}
-----------------------------------------------------------------------
This is because the kernel assumes the len argument of sched_getaffinity()
is bigger than NR_CPUS. That assumption is no longer correct.
Now we are faced with the following annoying dilemma, due to
the limitations of the glibc interface built in years ago:
(1) if we change glibc's __CPU_SETSIZE definition, we lose
binary compatibility for _all_ applications.
(2) if we don't change it, we instead lose binary compatibility for
Sharyathi's use case.
Therefore, I propose to change the rule for the len argument of
sched_getaffinity().
Old:
len should be bigger than NR_CPUS
New:
len should be bigger than maximum possible cpu id
This creates the following behavior:
(A) On a real 4096-CPU machine, the above test program still
returns -EINVAL.
(B) With NR_CPUS=4096 but a machine that has fewer than 1024 CPUs (almost
all machines in the world), the above runs successfully.
Fortunately, big SGI machines are mainly used for HPC, which means
their users can rebuild their programs.
IOW, we hope they are not annoyed by this issue ...
Reported-by: Sharyathi Nagesh <sharyath@in.ibm.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Ulrich Drepper <drepper@redhat.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jack Steiner <steiner@sgi.com>
Cc: Russ Anderson <rja@sgi.com>
Cc: Mike Travis <travis@sgi.com>
LKML-Reference: <20100312161316.9520.A69D9226@jp.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Suresh Siddha [Thu, 1 Apr 2010 01:04:47 +0000 (18:04 -0700)]
x86: Fix double enable_IR_x2apic() call on SMP kernel on !SMP boards
commit
472a474c6630efd195d3738339fd1bdc8aa3b1aa upstream.
Jan Grossmann reported a kernel boot panic while booting an SMP
kernel on his system with a single-core CPU. SMP kernels call
enable_IR_x2apic() from native_smp_prepare_cpus(), and on
platforms where the kernel doesn't find an SMP configuration we
ended up calling enable_IR_x2apic() again from the
APIC_init_uniprocessor() call in smp_sanity_check(), thus
leading to a kernel panic.
Don't call enable_IR_x2apic() and default_setup_apic_routing()
from APIC_init_uniprocessor() in the CONFIG_SMP case.
NOTE: this kind of non-idempotent and asymmetric initialization
sequence is rather fragile and unclean; we'll clean that up
in v2.6.35. This is the minimal fix for v2.6.34.
Reported-by: Jan.Grossmann@kielnet.net
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: <jbarnes@virtuousgeek.org>
Cc: <david.woodhouse@intel.com>
Cc: <weidong.han@intel.com>
Cc: <youquan.song@intel.com>
Cc: <Jan.Grossmann@kielnet.net>
LKML-Reference: <1270083887.7835.78.camel@sbs-t61.sc.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Pallipadi, Venkatesh [Thu, 25 Feb 2010 18:53:48 +0000 (10:53 -0800)]
x86, hpet: Erratum workaround for read after write of HPET comparator
commit
8da854cb02156c90028233ae1e85ce46a1d3f82c upstream.
On Wed, Feb 24, 2010 at 03:37:04PM -0800, Justin Piszcz wrote:
> Hello,
>
> Again, on the Intel DP55KG board:
>
> # uname -a
> Linux host 2.6.33 #1 SMP Wed Feb 24 18:31:00 EST 2010 x86_64 GNU/Linux
>
> [ 1.237600] ------------[ cut here ]------------
> [ 1.237890] WARNING: at arch/x86/kernel/hpet.c:404 hpet_next_event+0x70/0x80()
> [ 1.238221] Hardware name:
> [ 1.238504] hpet: compare register read back failed.
> [ 1.238793] Modules linked in:
> [ 1.239315] Pid: 0, comm: swapper Not tainted 2.6.33 #1
> [ 1.239605] Call Trace:
> [ 1.239886] <IRQ> [<ffffffff81056c13>] ? warn_slowpath_common+0x73/0xb0
> [ 1.240409] [<ffffffff81079608>] ? tick_dev_program_event+0x38/0xc0
> [ 1.240699] [<ffffffff81056cb0>] ? warn_slowpath_fmt+0x40/0x50
> [ 1.240992] [<ffffffff81079608>] ? tick_dev_program_event+0x38/0xc0
> [ 1.241281] [<ffffffff81041ad0>] ? hpet_next_event+0x70/0x80
> [ 1.241573] [<ffffffff81079608>] ? tick_dev_program_event+0x38/0xc0
> [ 1.241859] [<ffffffff81078e32>] ? tick_handle_oneshot_broadcast+0xe2/0x100
> [ 1.246533] [<ffffffff8102a67a>] ? timer_interrupt+0x1a/0x30
> [ 1.246826] [<ffffffff81085499>] ? handle_IRQ_event+0x39/0xd0
> [ 1.247118] [<ffffffff81087368>] ? handle_edge_irq+0xb8/0x160
> [ 1.247407] [<ffffffff81029f55>] ? handle_irq+0x15/0x20
> [ 1.247689] [<ffffffff810294a2>] ? do_IRQ+0x62/0xe0
> [ 1.247976] [<ffffffff8146be53>] ? ret_from_intr+0x0/0xa
> [ 1.248262] <EOI> [<ffffffff8102f277>] ? mwait_idle+0x57/0x80
> [ 1.248796] [<ffffffff8102645c>] ? cpu_idle+0x5c/0xb0
> [ 1.249080] ---[ end trace db7f668fb6fef4e1 ]---
>
> Is this something Intel has to fix or is it a bug in the kernel?
This is a chipset erratum.
Thomas: You mentioned we can retain this check only for known-buggy and
hpet debug kind of options. But here is the simple workaround patch for
this particular erratum.
Some chipsets have an erratum due to which a read immediately following a
write of the HPET comparator returns the old comparator value instead of
the most recently written value.
This is Erratum 15 in the
"Intel I/O Controller Hub 9 (ICH9) Family Specification Update"
(http://www.intel.com/assets/pdf/specupdate/316973.pdf).
The workaround for the erratum is to read the comparator twice if the
first read-back fails.
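A sketch of that retry, with a caller-supplied reader standing in for the real HPET register accessor (this is not the upstream hunk):
/* Returns non-zero only if the comparator still mismatches after a second
 * read, i.e. the write really was lost rather than hit by the erratum. */
static int hpet_cmp_mismatch(unsigned int (*read_cmp)(void), unsigned int want)
{
    unsigned int got = read_cmp();

    if (got != want)        /* erratum: first read-back may return the old value */
        got = read_cmp();   /* read the comparator a second time */

    return got != want;
}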
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
LKML-Reference: <20100225185348.GA9674@linux-os.sc.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Thomas Gleixner [Fri, 27 Nov 2009 14:24:44 +0000 (15:24 +0100)]
x86: hpet: Make WARN_ON understandable
commit
18ed61da985c57eea3fe8038b13fa2837c9b3c3f upstream.
Andrew complained rightly that the WARN_ON in hpet_next_event() is
confusing and the code comment not really helpful.
Change it to WARN_ONCE and print the reason in clear text. Change the
comment to explain what kind of hardware wreckage we deal with.
Pointed-out-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Shaohua Li [Fri, 5 Mar 2010 00:59:32 +0000 (08:59 +0800)]
x86-32, resume: do a global tlb flush in S4 resume
commit
8ae06d223f8203c72104e5c0c4ee49a000aedb42 upstream.
Colin King reported a strange oops in the S4 resume code path (see below). The
test system has an i5/i7 CPU. The kernel doesn't enable PAE, so 4M page tables
are used. The oops always happens at virtual address 0xc03ff000, which is mapped
to the last 4k of the first 4M of memory. Doing a global TLB flush fixes the issue.
EIP: 0060:[<c0493a01>] EFLAGS: 00010086 CPU: 0
EIP is at copy_loop+0xe/0x15
EAX: 36aeb000 EBX: 00000000 ECX: 00000400 EDX: f55ad46c
ESI: 0f800000 EDI: c03ff000 EBP: f67fbec4 ESP: f67fbea8
DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
...
...
CR2: 00000000c03ff000
Tested-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
LKML-Reference: <20100305005932.GA22675@sli10-desk.sh.intel.com>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Alex Deucher [Tue, 6 Apr 2010 03:57:52 +0000 (23:57 -0400)]
drm/radeon/kms: fix washed out image on legacy tv dac
commit
643acacf02679befd0f98ac3c5fecb805f1c9548 upstream.
A bad cast was overwriting the tvdac adj values.
Fixes fdo bug 27478.
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Michel Dänzer [Fri, 2 Apr 2010 16:59:06 +0000 (16:59 +0000)]
drm/radeon: R300 AD only has one quad pipe.
commit
57b54ea6b7863ccfeb41851b5f58f9fd1b83c79e upstream.
Gleaned from the Mesa code.
Fixes https://bugs.freedesktop.org/show_bug.cgi?id=27355 .
Signed-off-by: Michel Dänzer <daenzer@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Adam Jackson [Tue, 6 Apr 2010 16:11:00 +0000 (16:11 +0000)]
drm/edid/quirks: Envision EN2028
commit
ba1163de2f74d624e7b0e530c4104c98ede0045a upstream.
Claims 1280x1024 preferred, physically 1600x1200
cf. http://bugzilla.redhat.com/530399
Signed-off-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Rabin Vincent [Wed, 7 Apr 2010 17:10:20 +0000 (18:10 +0100)]
ARM: 6031/1: fix Thumb-2 decompressor
commit
d4d9959c099751158c5cf14813fe378e206339c6 upstream.
98e12b5a6e05413 ("ARM: Fix decompressor's kernel size estimation for
ROM=y") broke the Thumb-2 decompressor because it added an entry in the
LC0 table but didn't adjust the offset the Thumb-2 code uses to load the
SP from that table. Fix it.
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Wey-Yi Guy [Thu, 8 Apr 2010 20:17:37 +0000 (13:17 -0700)]
iwlwifi: need check for valid qos packet before free
commit
ece6444c2fe80dab679beb5f0d58b091f1933b00 upstream.
For 4965, we need to check that a frame is a valid QoS frame before freeing;
only a valid QoS frame has the tid used to free the packets.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Steve French [Sat, 3 Apr 2010 17:20:21 +0000 (17:20 +0000)]
CIFS: initialize nbytes at the beginning of CIFSSMBWrite()
commit
a24e2d7d8f512340991ef0a59cb5d08d491b8e98 upstream.
By doing this we always overwrite the nbytes value that is passed to
CIFSSMBWrite() and need not rely on the callers to initialize it. CIFSSMBWrite2
does this already.
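A hedged sketch of the pattern (function and parameter names here are hypothetical, not the CIFS code):
/* Zero the out-parameter up front so early error returns never leave the
 * caller looking at an uninitialized byte count. */
static int write_sketch(const void *buf, unsigned int count, unsigned int *nbytes)
{
    *nbytes = 0;                    /* initialize at the beginning */

    if (!buf || count == 0)
        return -1;                  /* error paths still leave *nbytes defined */

    /* ... issue the write and fill *nbytes from the server reply ... */
    return 0;
}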
Reviewed-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
Reviewed-by: Jeff Layton <jlayton@samba.org>
Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Suresh Jayaraman [Wed, 31 Mar 2010 06:30:03 +0000 (12:00 +0530)]
cifs: Fix a kernel BUG with remote OS/2 server (try #3)
commit
6513a81e9325d712f1bfb9a1d7b750134e49ff18 upstream.
While chasing a bug report involving an OS/2 server, I noticed the server sets
pSMBr->CountHigh to an incorrect value even in the case of normal writes. This
results in 'nbytes' being computed wrongly and triggers a kernel BUG at
mm/filemap.c.
void iov_iter_advance(struct iov_iter *i, size_t bytes)
{
BUG_ON(i->count < bytes); <--- BUG here
Why the server sets 'CountHigh' is not clear, but it only does so after
writing 64k bytes. Though this looks like a server bug, the client-side
crash is not acceptable.
The workaround, as suggested by Jeff Layton, is to mask off the high 16 bits
if the number of bytes written as returned by the server is greater than the
number of bytes requested by the client.
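A small sketch of that masking, assuming plain unsigned byte counts (names are illustrative, not the exact CIFS hunk):
/* If the server reports more bytes written than were requested, the high
 * 16 bits (CountHigh) are bogus; mask them off. */
static unsigned int sanitize_nbytes(unsigned int nbytes, unsigned int requested)
{
    if (nbytes > requested)
        nbytes &= 0xFFFF;
    return nbytes;
}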
Reviewed-by: Jeff Layton <jlayton@samba.org>
Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Tejun Heo [Mon, 5 Apr 2010 01:51:26 +0000 (10:51 +0900)]
libata: disable NCQ on Crucial C300 SSD
commit
68b0ddb289220b6d4d865be128939663be34959d upstream.
Crucial said,
Thank you for contacting us. We know that with our M225 line of SSDs
you sometimes need to disable NCQ (native command queuing) to avoid
just the type of errors you're seeing. Our recommendation for the
M225 is to add libata.force=noncq to your Linux kernel boot options,
under the kernel ATA library option.
I have sent your feedback to the engineers working on the C300, and
asked them to please pass it on to the firmware team. I have been
notified that they are in the process of testing and finalizing a
new firmware version, that you can expect to see released around the
end of April. We’ll keep you posted as to when it will be available
for download.
So, turn off NCQ on the drive w/ the current firmware revision.
Reported in the following bug.
https://bugzilla.kernel.org/show_bug.cgi?id=15573
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: lethalwp@scarlet.be
Reported-by: Luke Macken <lmacken@redhat.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Alan Jenkins [Sat, 20 Feb 2010 11:02:24 +0000 (11:02 +0000)]
eeepc-laptop: disable wireless hotplug for 1005PE
commit
ced69c59811f05b2f8378467cbb82ac6ed3c6a5a upstream.
The wireless hotplug code is not needed on this model, and it disables
the wired ethernet card. (Like on the 1005HA and 1201N).
References: <http://lists.alioth.debian.org/pipermail/debian-eeepc-devel/2010-February/003281.html>
[bwh: Backported to 2.6.32]
Signed-off-by: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
Reported-by: Ansgar Burchardt <ansgar@43-1.org>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Corentin Chary [Wed, 6 Jan 2010 21:07:41 +0000 (22:07 +0100)]
eeepc-laptop: disable wireless hotplug for 1201N
commit
4194e2f551a6308e6ab34ac88210bf54858aa7df upstream.
[bwh: Backported to 2.6.32]
Signed-off-by: Corentin Chary <corentincj@iksaif.net>
Signed-off-by: Len Brown <len.brown@intel.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Corentin Chary [Wed, 6 Jan 2010 21:07:40 +0000 (22:07 +0100)]
eeepc-laptop: add hotplug_disable parameter
commit
322a1356be96bcc4b97e8e370f6468c821330077 upstream.
Some new models need to disable wireless hotplug.
For the moment, we don't know exactly which models need that,
except the 1005HA.
Users will be able to use this param as a workaround.
[bwh: Backported to 2.6.32]
Signed-off-by: Corentin Chary <corentincj@iksaif.net>
Signed-off-by: Len Brown <len.brown@intel.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Corentin Chary [Wed, 6 Jan 2010 21:07:38 +0000 (22:07 +0100)]
eeepc-laptop: dmi blacklist to disable pci hotplug code
commit
10ae4b5663ff3092553bfbd867e7bd474ce6c553 upstream.
This is a short term workaround for Eeepc 1005HA.
refs: <http://bugzilla.kernel.org/show_bug.cgi?id=14570>
[bwh: Backported to 2.6.32]
Signed-off-by: Corentin Chary <corentincj@iksaif.net>
Signed-off-by: Len Brown <len.brown@intel.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Alan Jenkins [Wed, 6 Jan 2010 21:07:37 +0000 (22:07 +0100)]
eeepc-laptop: disable cpu speed control on EeePC 701
commit
da8ba01deb98f3dc0558b1f5a37e64f40bba7904 upstream.
The EeePC 4G ("701") implements CFVS, but it is not supported by the
pre-installed OS, and the original option to change it in the BIOS
setup screen was removed in later versions. Judging by the lack of
"Super Hybrid Engine" on Asus product pages, this applies to all "701"
models (4G/4G Surf/2G Surf).
So Asus made a deliberate decision not to support it on this model.
We have several reports that using it can cause the system to hang [1].
That said, it does not happen all the time. Some users do not
experience it at all (and apparently wish to continue "right-clocking").
Check for the EeePC 701 using DMI. If it matches, disable writes to the
"cpufv" sysfs attribute and log an explanatory message.
Add a "cpufv_disabled" attribute which allows users to override this
policy. Writing to this attribute will log a second message.
The sysfs attribute is more useful than a module option, because it
makes it easier for userspace scripts to provide consistent behaviour
(according to user configuration), regardless of whether the kernel
includes this change.
[1] <http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=559578>
[bwh: Backported to 2.6.32]
Signed-off-by: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
Signed-off-by: Corentin Chary <corentincj@iksaif.net>
Signed-off-by: Len Brown <len.brown@intel.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Henrique de Moraes Holschuh [Fri, 26 Feb 2010 01:22:22 +0000 (22:22 -0300)]
thinkpad-acpi: lock down video output state access
commit
b525c06cdbd8a3963f0173ccd23f9147d4c384b5 upstream.
Given the right combination of ThinkPad and X.org, just reading the
video output control state is enough to hard-crash X.org.
Until the day I somehow find out a model or BIOS cut date to not
provide this feature to ThinkPads that can do video switching through
X RandR, change permissions so that only processes with CAP_SYS_ADMIN
can access any sort of video output control state.
This bug could be considered a local DoS I suppose, as it allows any
non-privileged local user to cause some versions of X.org to
hard-crash some ThinkPads.
Reported-by: Jidanni <jidanni@jidanni.org>
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Alexey Dobriyan [Tue, 15 Dec 2009 23:51:12 +0000 (21:51 -0200)]
thinkpad-acpi: convert to seq_file
commit
887965e6576a78f71b9b98dec43fd1c73becd2e8 upstream.
(hmh@hmh.eng.br: Updated to apply to 2.6.32.y)
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Henrique de Moraes Holschuh [Tue, 15 Dec 2009 23:51:07 +0000 (21:51 -0200)]
thinkpad-acpi: log initial state of rfkill switches
commit
5451a923bbdcff6ae665947e120af7238b21a9d2 upstream.
We already log the initial state of the hardware rfkill switch (WLSW),
so we might as well log the state of the soft switches as well.
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Cc: Josip Rodin <joy+kernel@entuzijast.net>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Henrique de Moraes Holschuh [Tue, 15 Dec 2009 23:51:06 +0000 (21:51 -0200)]
thinkpad-acpi: sync input device EV_SW initial state
commit
d89a727aff649f6768f7a34ee57f031ebf8bab4c upstream.
Before we register the input device, sync the input layer EV_SW state
through a call to input_report_switch(), to avoid issuing a gratuitous
event for the initial state of these switches.
This fixes some annoyances caused by the interaction with rfkill and
EV_SW SW_RFKILL_ALL events.
Reported-by: Kevin Locke <kevin@kevinlocke.name>
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Cc: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Henrique de Moraes Holschuh [Wed, 9 Dec 2009 01:36:29 +0000 (01:36 +0000)]
thinkpad-acpi: use input_set_capability
commit
792979c8032b8f5adb77ea986db7082fff04c8e7 upstream.
Use input_set_capability() instead of set_bit.
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Cc: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Henrique de Moraes Holschuh [Wed, 9 Dec 2009 01:36:28 +0000 (01:36 +0000)]
thinkpad-acpi: log temperatures on thermal alarm (v2)
commit
9ebd9e833648745fa5ac6998b9e0153ccd3ba839 upstream.
Log temperatures on any of the EC thermal alarms. It could be
useful to help track down what is happening...
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Henrique de Moraes Holschuh [Wed, 9 Dec 2009 01:36:27 +0000 (01:36 +0000)]
thinkpad-acpi: expose module parameters
commit
b09c72259e88cec3d602aef987a3209297f3a9c2 upstream.
Export the normal (non-command) module parameters as mode 0444, so
that they will show up in sysfs.
These parameters must not be changed at runtime as a rule, with very
few exceptions.
Reported-by: Ferenc Wagner <wferi@niif.hu>
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Henrique de Moraes Holschuh [Wed, 9 Dec 2009 01:36:26 +0000 (01:36 +0000)]
thinkpad-acpi: adopt input device
commit
d112ef95d4ec1ee7fe7123e3f21e4aac0d57570c upstream.
Properly init the parent field of the input device. Thanks to Alan
Jenkins, who noted this problem in a different driver.
Reported-by: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Henrique de Moraes Holschuh [Wed, 9 Dec 2009 01:36:25 +0000 (01:36 +0000)]
thinkpad-acpi: silence bogus complaint during rmmod
commit
6b30eb7d211840ba1a03f855d9e7b80a921368f2 upstream.
Fix this bogus warning during module shutdown, when
backlight event reporting is enabled:
"thinkpad_acpi: required events 0x00018000 not enabled!"
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Henrique de Moraes Holschuh [Wed, 9 Dec 2009 01:36:24 +0000 (01:36 +0000)]
thinkpad-acpi: issue backlight class events
commit
347a26860e2293b1347996876d3550499c7bb31f upstream.
Take advantage of the new events capabilities of the backlight class to
notify userspace of backlight changes.
This depends on "backlight: Allow drivers to update the core, and
generate events on changes", by Matthew Garrett.
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Cc: Matthew Garrett <mjg@redhat.com>
Cc: Richard Purdie <rpurdie@linux.intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Henrique de Moraes Holschuh [Wed, 9 Dec 2009 01:36:23 +0000 (01:36 +0000)]
thinkpad-acpi: fix some version quirks
commit
90765c6aee568137521ba19347c744b5abde8161 upstream.
Update some of the BIOS/EC version quirks.
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Eric Sandeen [Mon, 16 Nov 2009 22:27:30 +0000 (16:27 -0600)]
ext3: journal all modifications in ext3_xattr_set_handle
commit
d965736b8cb42ae51ba9c3f13488035a98d025c6 upstream.
ext3_xattr_set_handle() was zeroing out an inode outside
of journaling constraints; this is one of the accesses that
was causing the crc errors in journal replay as seen in
kernel.org bugzilla #14354.
Although ext3 doesn't have the crc issue, modifications
out of journal control are a Bad Thing.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: maximilian attems <max@stro.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Eric Sandeen [Mon, 16 Nov 2009 22:34:51 +0000 (16:34 -0600)]
ext3: Don't update the superblock in ext3_statfs()
commit
b918397542388de75bd86c32fbfa820e5d629fa9 upstream.
Commit a71ce8c6c9bf269b192f352ea555217815cf027e updated ext3_statfs()
to update the on-disk superblock counters, but modified this buffer
directly without any journaling of the change. This is one of the
accesses that was causing the crc errors in journal replay as seen in
kernel.org bugzilla #14354.
The modifications were originally to keep the sb "more" in sync,
so that a readonly fsck of the device didn't flag this as an
error (as often), but apparently e2fsprogs deals with this differently
now, anyway.
Based on Ted's patch for ext4, which was in turn based on my
work on that bug and another preliminary patch...
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: maximilian attems <max@stro.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
JosephChan@via.com.tw [Fri, 19 Mar 2010 06:08:11 +0000 (14:08 +0800)]
pata_via: Add VIA VX900 support
commit
4f1deba435ef75380c1d06fda860c7a15ea16fdf upstream.
Signed-off-by: Joseph Chan <josephchan@via.com.tw>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Marcelo Tosatti [Fri, 19 Mar 2010 14:47:39 +0000 (15:47 +0100)]
KVM: x86: disable paravirt mmu reporting
commit
a68a6a7282373bedba8a2ed751b6384edb983a64 upstream
Disable paravirt MMU capability reporting, so that new (or rebooted)
guests switch to native operation.
Paravirt MMU is a burden to maintain and does not bring significant
advantages compared to shadow anymore.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Sheng Yang [Fri, 19 Mar 2010 14:47:38 +0000 (15:47 +0100)]
KVM: VMX: Disable unrestricted guest when EPT disabled
commit
046d87103addc117f0d397196e85189722d4d7de upstream
Otherwise it would cause a VMEntry failure when using ept=0 on processors
that support unrestricted guest mode.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Eduardo Habkost [Fri, 19 Mar 2010 14:47:37 +0000 (15:47 +0100)]
KVM: SVM: Reset cr0 properly on vcpu reset
commit
18fa000ae453767b59ab97477925895a3f0c46ea upstream
svm_vcpu_reset() was not properly resetting the contents of the guest-visible
cr0 register, causing the following issue:
https://bugzilla.redhat.com/show_bug.cgi?id=525699
Without resetting cr0 properly, the vcpu was running the SIPI bootstrap routine
with paging enabled, making the vcpu get a pagefault exception while trying to
run it.
Instead of setting vmcb->save.cr0 directly, the new code just resets
kvm->arch.cr0 and calls kvm_set_cr0(). The bits that were set/cleared on
vmcb->save.cr0 (PG, WP, !CD, !NW) will be set properly by svm_set_cr0().
kvm_set_cr0() is used instead of calling svm_set_cr0() directly to make sure
kvm_mmu_reset_context() is called to reset the mmu to nonpaging mode.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Eduardo Habkost [Fri, 19 Mar 2010 14:47:36 +0000 (15:47 +0100)]
KVM: VMX: Use macros instead of hex value on cr0 initialization
commit
fa40052ca04bdbbeb20b839cc8ffe9fa7beefbe9 upstream
This should have no effect, it is just to make the code clearer.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Jan Kiszka [Fri, 19 Mar 2010 14:47:35 +0000 (15:47 +0100)]
KVM: VMX: Update instruction length on intercepted BP
commit
c573cd22939e54fc1b8e672054a505048987a7cb upstream
We intercept #BP while in guest debugging mode. As VM exits due to
intercepted exceptions do not necessarily come with valid
idt_vectoring, we have to update event_exit_inst_len explicitly in such
cases. At least in the absence of migration, this ensures that
re-injections of #BP will find and use the correct instruction length.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Gleb Natapov [Fri, 19 Mar 2010 14:47:34 +0000 (15:47 +0100)]
KVM: Fix segment descriptor loading
commit
c697518a861e6c43b92b848895f9926580ee63c3 upstream
Add proper error and permission checking. This patch also changes the task
switching code to load segment selectors before segment descriptors, as the
SDM requires; otherwise permission checking during segment descriptor
loading will be incorrect.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Gleb Natapov [Fri, 19 Mar 2010 14:47:33 +0000 (15:47 +0100)]
KVM: x86 emulator: Fix popf emulation
commit
d4c6a1549c056f1d817e8f6f2f97d8b44933472f upstream
POPF behaves differently depending on the current CPU mode. Emulate the correct
logic to prevent the guest from changing flags that it can't change otherwise.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Gleb Natapov [Fri, 19 Mar 2010 14:47:32 +0000 (15:47 +0100)]
KVM: x86 emulator: Check IOPL level during io instruction emulation
commit
f850e2e603bf5a05b0aee7901857cf85715aa694 upstream
Make emulator check that vcpu is allowed to execute IN, INS, OUT,
OUTS, CLI, STI.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Gleb Natapov [Fri, 19 Mar 2010 14:47:31 +0000 (15:47 +0100)]
KVM: x86 emulator: fix memory access during x86 emulation
commit
1871c6020d7308afb99127bba51f04548e7ca84e upstream
Currently, when the x86 emulator needs to access memory, the page walk is done
with the broadest permissions possible, so if the emulated instruction was
executed by a userspace process it can still access kernel memory. Fix that by
providing the correct memory access mode to the page walker during emulation.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Gleb Natapov [Fri, 19 Mar 2010 14:47:30 +0000 (15:47 +0100)]
KVM: x86 emulator: Add Virtual-8086 mode of emulation
commit
a0044755679f3e761b8b95995e5f2db2b7efd0f6 upstream
For some instructions the CPU behaves differently in real mode and
virtual-8086 mode. Let the emulator know which mode the CPU is in, so it will
not poke into vcpu state directly.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Evan McClain [Wed, 10 Mar 2010 00:20:58 +0000 (19:20 -0500)]
backlight: mbp_nvidia_bl - add five more MacBook variants
commit
36bc5ee6a8d13333980fa54e97d3469d3d4cda98 upstream.
This adds the MacBook 1,1 2,1 3,1 4,1 and 4,2 to the DMI tables.
Signed-off-by: Evan McClain <evan.mcclain@gatech.edu>
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Jiri Slaby [Fri, 19 Mar 2010 01:51:56 +0000 (02:51 +0100)]
resource: move kernel function inside __KERNEL__
commit
96d07d211739fd2450ac54e81d00fa40fcd4b1bd upstream
From: Jiri Slaby <jslaby@suse.cz>
resource: move kernel function inside __KERNEL__
It is an internal function. Move it inside __KERNEL__ ifdef, along
with task_struct declaration.
Then we get:
#--- /usr/include/linux/resource.h  2009-09-14 15:09:29.000000000 +0200
#+++ usr/include/linux/resource.h  2010-01-04 11:30:54.000000000 +0100
#@@ -3,8 +3,6 @@
#
##include <linux/time.h>
#
#-struct task_struct;
#-
#/*
#* Resource control/accounting header file for linux
#*/
#@@ -70,6 +68,5 @@
#*/
##include <asm/resource.h>
#
#-int getrusage(struct task_struct *p, int who, struct rusage *ru);
#
##endif
#
#***********
include/linux/Kbuild is untouched, since unifdef is run even on
headers-y nowadays.
backport to 2.6.32 by maximilian attems <max@stro.at>
Patch commented out by gregkh due to quilt complaining.
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: maximilian attems <max@stro.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Andreas Herrmann [Wed, 16 Dec 2009 14:43:55 +0000 (15:43 +0100)]
x86, amd: Get multi-node CPU info from NodeId MSR instead of PCI config space
commit
9d260ebc09a0ad6b5c73e17676df42c7bc75ff64 upstream.
Use NodeId MSR to get NodeId and number of nodes per processor.
Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
LKML-Reference: <20091216144355.GB28798@alberich.amd.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Daniel T Chen [Tue, 30 Mar 2010 17:29:28 +0000 (13:29 -0400)]
ALSA: hda: Fix 0 dB offset for Lenovo Thinkpad models using AD1981
commit
b8e80cf386419453678b01bef830f53445ebb15d upstream.
BugLink: https://launchpad.net/bugs/551606
The OR's hardware distorts at PCM 100% because it does not correspond to
0 dB. Fix this in patch_ad1981() for all models using the Thinkpad
quirk.
Reported-by: Jane Silber
Signed-off-by: Daniel T Chen <crimsun@ubuntu.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dan Carpenter [Tue, 6 Apr 2010 16:31:26 +0000 (19:31 +0300)]
ALSA: mixart: range checking proc file
commit
b0cc58a25d04160d39a80e436847eaa2fbc5aa09 upstream.
The original code doesn't take into consideration that the value of
MIXART_BA0_SIZE - pos can be less than zero, which would lead to a large
unsigned value for "count".
I also moved the check that the read size is a multiple of 4 bytes below
the code that adjusts "count".
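A sketch of the clamping order described above; BA0_SIZE is a stand-in constant, not the driver's MIXART_BA0_SIZE:
enum { BA0_SIZE = 16 * 1024 };          /* stand-in for MIXART_BA0_SIZE */

static unsigned long clamp_count(unsigned long pos, unsigned long count)
{
    if (pos >= BA0_SIZE)
        return 0;                       /* nothing left to read */
    if (count > BA0_SIZE - pos)         /* avoid the negative difference wrapping huge */
        count = BA0_SIZE - pos;
    return count & ~3UL;                /* only then enforce the 4-byte multiple */
}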
Signed-off-by: Dan Carpenter <error27@gmail.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Wu Fengguang [Tue, 6 Apr 2010 21:34:53 +0000 (14:34 -0700)]
readahead: fix NULL filp dereference
commit
70655c06bd3f25111312d63985888112aed15ac5 upstream.
btrfs relocate_file_extent_cluster() calls us with NULL filp:
[ 4005.426805] BUG: unable to handle kernel NULL pointer dereference at 00000021
[ 4005.426818] IP: [<c109a130>] page_cache_sync_readahead+0x18/0x3e
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Cc: Yan Zheng <yanzheng@21cn.com>
Reported-by: Kirill A. Shutemov <kirill@shutemov.name>
Tested-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Anton Blanchard [Tue, 6 Apr 2010 21:34:58 +0000 (14:34 -0700)]
raw: fsync method is now required
commit
55ab3a1ff843e3f0e24d2da44e71bffa5d853010 upstream.
Commit 148f948ba877f4d3cdef036b1ff6d9f68986706a (vfs: Introduce new
helpers for syncing after writing to O_SYNC file or IS_SYNC inode) broke
the raw driver.
We now call through generic_file_aio_write -> generic_write_sync ->
vfs_fsync_range. vfs_fsync_range has:
if (!fop || !fop->fsync) {
ret = -EINVAL;
goto out;
}
But drivers/char/raw.c doesn't set an fsync method.
We have two options: fix it or remove the raw driver completely. I'm
happy to do either, the fact this has been broken for so long suggests it
is rarely used.
The patch below adds an fsync method to the raw driver. My knowledge of
the block layer is pretty sketchy so this could do with a once over.
If we instead decide to remove the raw driver, this patch might still be
useful as a backport to 2.6.33 and 2.6.32.
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jens Axboe <jens.axboe@oracle.com>
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Tested-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Jiri Kosina [Tue, 23 Mar 2010 15:32:37 +0000 (16:32 +0100)]
HID: fix oops in gyration_event()
commit
d8e4ebf8b603bdcd091540e6b5bddf0dec10d516 upstream.
Fix oops caused by dereferencing field->hidinput in cases where
the device hasn't been claimed by hid-input.
Reported-by: Andreas Demmer <mail@andreas-demmer.de>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Alan Cox [Mon, 30 Nov 2009 13:23:05 +0000 (13:23 +0000)]
pata_ali: Fix regression with old devices
commit
d6250a03fa736c1bff4df4601f5af2dc21f2bf9e upstream.
Making the new stuff work broke some of the old chipsets. We need to go
back to the old set-up values for these, it seems. Unfortunately, even with
documentation, this is basically a mix of cargo-culting and guesswork.
Chased down to the exact line by Gianluca.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Cc: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Éric Piel [Tue, 15 Dec 2009 02:01:40 +0000 (18:01 -0800)]
lis3: fix show rate for 8 bits chips
commit
4b5d95b3809bcd77599122494aa3f575cd6ab1b9 upstream.
Originally the driver was only targeted at 12-bit sensors. When support
for 8-bit sensors was added, some slight differences in the registers were
overlooked. This should fix it, both for initialization and for
displaying the rate.
Reported-by: Kalhan Trisal <kalhan.trisal@intel.com>
Reported-by: Christoph Plattner <christoph.plattner@gmx.at>
Tested-by: Christoph Plattner <christoph.plattner@gmx.at>
Tested-by: Samu Onkalo <samu.p.onkalo@nokia.com>
Signed-off-by: Éric Piel <eric.piel@tremplin-utc.net>
Signed-off-by: Samu Onkalo <samu.p.onkalo@nokia.com>
Cc: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Oleg Nesterov [Fri, 2 Apr 2010 16:05:12 +0000 (18:05 +0200)]
tty: release_one_tty() forgets to put pids
commit
6da8d866d0d39e9509ff826660f6a86a6757c966 upstream.
release_one_tty(tty) can be called when tty still has a reference
to pgrp/session. In this case we leak the pid.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-and-tested-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Thomas Gleixner [Wed, 31 Mar 2010 11:30:19 +0000 (13:30 +0200)]
genirq: Force MSI irq handlers to run with interrupts disabled
commit
753649dbc49345a73a2454c770a3f2d54d11aec6 upstream.
Network folks reported that directing all MSI-X vectors of their multi-queue
NICs to a single core can cause interrupt stack overflows when
enough interrupts fire at the same time.
This is caused by the fact that we run interrupt handlers by default
with interrupts enabled unless the driver requests the interrupt with
the IRQF_DISABLED flag set. The NIC handlers do not set this flag, so
simultaneous interrupts can nest without limit and cause the stack
overflow.
The only safe counter measure is to run the interrupt handlers with
interrupts disabled. We can't switch to this mode in general right
now, but it is safe to do so for MSI interrupts.
Force IRQF_DISABLED for MSI interrupt handlers.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Linus Torvalds <torvalds@osdl.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: David Miller <davem@davemloft.net>
Cc: Greg Kroah-Hartman <gregkh@suse.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Seth Heasley [Thu, 25 Mar 2010 23:14:41 +0000 (16:14 -0700)]
WATCHDOG: iTCO_wdt: TCO Watchdog patch for additional Intel Cougar Point DeviceIDs
commit
4c7d849204341dea19be941a3c1eb4bdffac9cc4 upstream.
This patch adds the Intel Cougar Point PCH LPC Controller DeviceIDs for iTCO Watchdog.
Signed-off-by: Seth Heasley <seth.heasley@intel.com>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Thomas Mingarelli [Wed, 17 Mar 2010 15:33:31 +0000 (15:33 +0000)]
WATCHDOG: hpwdt - fix lower timeout limit
commit
8ba42bd88c6982fe224b09c33151c797b0fdf1a5 upstream.
[Novell Bug 581103] HP Watchdog driver has arbitrary (wrong) timeout limits.
Fix the lower timeout limit to a more appropriate value.
Signed-off-by: Thomas Mingarelli <Thomas.Mingarelli@hp.com>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Wey-Yi Guy [Wed, 3 Feb 2010 17:28:55 +0000 (09:28 -0800)]
mac80211: tear down all agg queues when restart/reconfig hw
commit
74e2bd1fa3ae9695af566ad5a7a288898787b909 upstream.
When there is a need to restart/reconfigure the hw, tear down all the
aggregation queues and let mac80211 and the driver get in sync and have
the opportunity to re-establish the aggregation queues again.
We need to wait until the driver re-establishes all the station information
before tearing down the aggregation queues; the driver (at least the iwlwifi
driver) will reject the stop-aggregation-queue request if the station is not
ready. But we also need to make sure the aggregation queues are torn down
before waking up the queues, so mac80211 will not send frames with the
aggregation bit set.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Johannes Berg [Mon, 22 Mar 2010 20:42:43 +0000 (13:42 -0700)]
mac80211: move netdev queue enabling to correct spot
commit
7236fe29fd72d17074574ba312e7f1bb9d10abaa upstream.
"mac80211: fix skb buffering issue" still left a race
between enabling the hardware queues and the virtual
interface queues. In hindsight it's totally obvious
that enabling the netdev queues for a hardware queue
when the hardware queue is enabled is wrong, because
it could well be possible that we can fill the hw queue
with packets we already have pending. Thus, we must
only enable the netdev queues once all the pending
packets have been processed and sent off to the device.
In testing, I haven't been able to trigger this race
condition, but it's clearly there, possibly only when
aggregation is being enabled.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Valentin Longchamp [Fri, 26 Mar 2010 10:44:33 +0000 (11:44 +0100)]
setup correct int pipe type in ar9170_usb_exec_cmd
commit
2d20c72c021d96f8b9230396c8e3782f204214ec upstream.
An int urb is constructed but we fill it in with a bulk pipe type.
Commit f661c6f8c67bd55e93348f160d590ff9edf08904 implemented a pipe type
check when CONFIG_USB_DEBUG is enabled. The check failed for all the ar9170
usb transfers and the driver could not configure the wifi dongle.
This went unnoticed until now because most people don't have
CONFIG_USB_DEBUG enabled.
Signed-off-by: Valentin Longchamp <valentin.longchamp@epfl.ch>
Acked-by: Christian Lamparter <chunkeey@googlemail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dan Carpenter [Sun, 28 Mar 2010 11:55:00 +0000 (14:55 +0300)]
iwlwifi: range checking issue
commit
8e1a53c615e8efe0fac670f2973da64758748a8a upstream.
IWL_RATE_COUNT is 13 and IWL_RATE_COUNT_LEGACY is 12.
IWL_RATE_COUNT_LEGACY is the right one here because iwl3945_rates
doesn't support 60M and also that's how "rates" is defined in
iwlcore_init_geos() from drivers/net/wireless/iwlwifi/iwl-core.c.
rates = kzalloc((sizeof(struct ieee80211_rate) * IWL_RATE_COUNT_LEGACY),
GFP_KERNEL);
Signed-off-by: Dan Carpenter <error27@gmail.com>
Acked-by: Zhu Yi <yi.zhu@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Stanislaw Gruszka [Thu, 18 Mar 2010 14:29:33 +0000 (14:29 +0000)]
iwlwifi: fix nfreed--
During backporting of a120e912eb51e347f36c71b60a1d13af74d30e83
("iwlwifi: sanity check before counting number of tfds can be free")
we forgot one hunk, which makes lots of "free more than
tfds_in_queue" messages show up in dmesg.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Tested-by: Adel Gadllah <adel.gadllah@gmail.com>
(picked from https://patchwork.kernel.org/patch/86722/)
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Wey-Yi Guy [Thu, 18 Mar 2010 16:05:00 +0000 (09:05 -0700)]
iwlwifi: counting number of tfds can be free for 4965
commit
be6b38bcb175613f239e0b302607db346472c6b6 upstream.
One hunk for 4965 was forgotten in the "iwlwifi: error checking for number
of tfds in queue" patch.
Reported-by: Shanyu Zhao <shanyu.zhao@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Matt Helsley [Fri, 26 Mar 2010 22:51:44 +0000 (23:51 +0100)]
Freezer: Fix buggy resume test for tasks frozen with cgroup freezer
commit
5a7aadfe2fcb0f69e2acc1fbefe22a096e792fc9 upstream.
When the cgroup freezer is used to freeze tasks we do not want to thaw
those tasks during resume. Currently we test the cgroup freezer
state of the resuming tasks to see if the cgroup is FROZEN. If so
then we don't thaw the task. However, the FREEZING state also indicates
that the task should remain frozen.
This also avoids a problem pointed out by Oren Ladaan: the freezer state
transition from FREEZING to FROZEN is updated lazily when userspace reads
or writes the freezer.state file in the cgroup filesystem. This means that
resume will thaw tasks in cgroups which should be in the FROZEN state if
there is no read/write of the freezer.state file to trigger this
transition before suspend.
NOTE: Another "simple" solution would be to always update the cgroup
freezer state during resume. However it's a bad choice for several reasons:
Updating the cgroup freezer state is somewhat expensive because it requires
walking all the tasks in the cgroup and checking if they are each frozen.
Worse, this could easily make resume run in N^2 time where N is the number
of tasks in the cgroup. Finally, updating the freezer state from this code
path requires trickier locking because of the way locks must be ordered.
Instead of updating the freezer state we rely on the fact that lazy
updates only manage the transition from FREEZING to FROZEN. We know that
a cgroup with the FREEZING state may actually be FROZEN so test for that
state too. This makes sense in the resume path even for partially-frozen
cgroups -- those that really are FREEZING but not FROZEN.
Reported-by: Oren Ladaan <orenl@cs.columbia.edu>
Signed-off-by: Matt Helsley <matthltc@us.ibm.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Mike Christie [Tue, 9 Mar 2010 20:14:51 +0000 (14:14 -0600)]
libiscsi: Fix recovery slowdown regression
commit
4ae0a6c15efcc37e94e3f30e3533bdec03c53126 upstream.
We could be failing/stopping a connection due to libiscsi starting
recovery/cleanup, but the xmit path or scsi eh thread path
could be dropping the connection at the same time.
As a result the session->state gets set to failed instead of in
recovery. We end up not blocking the session
and so the replacement timeout never gets started and we only end up
failing the IO when scsi_softirq_done sees that the
cmd has been running for (cmd->allowed + 1) * rq->timeout secs.
We used to fail the IO right away so users are seeing a long
delay when using dm-multipath. This problem was added in
2.6.28.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Andrew Stubbs [Mon, 29 Mar 2010 03:04:19 +0000 (12:04 +0900)]
sh: Fix FDPIC binary loader
commit
d5ab780305bb6d60a7b5a74f18cf84eb6ad153b1 upstream.
Ensure that the aux table is properly initialized, even when optional
features are missing. Without this, the FDPIC loader did not work.
Signed-off-by: Andrew Stubbs <ams@codesourcery.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Matt Fleming [Sun, 28 Mar 2010 20:08:25 +0000 (20:08 +0000)]
sh: Enable the mmu in start_secondary()
commit
4bea3418c737891894b9d3d3e9f8bbd67d66fa38 upstream.
For the boot, enable_mmu() is called from setup_arch() but we don't call
setup_arch() for any of the other cpus. So turn on the non-boot cpu's
mmu inside of start_secondary().
I noticed this bug on an SMP board when trying to map I/O memory
(smsc911x registers) into the kernel address space. Since the Address
Translation bit in MMUCR wasn't set, accessing the virtual address where
the smsc911x registers were supposedly mapped actually performed a
physical address access.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Christoph Hellwig [Thu, 11 Mar 2010 22:42:17 +0000 (09:42 +1100)]
xfs: fix locking for inode cache radix tree tag updates
commit
f1f724e4b523d444c5a598d74505aefa3d6844d2 upstream
The radix-tree code requires its users to serialize tag updates
against other updates to the tree. While XFS protects tag updates
against each other it does not serialize them against updates of the
tree contents, which can lead to tag corruption. Fix the inode
cache to always take pag_ici_lock in exclusive mode when updating
radix tree tags.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Patrick Schreurs <patrick@news-service.com>
Tested-by: Patrick Schreurs <patrick@news-service.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dave Chinner [Thu, 11 Mar 2010 22:42:16 +0000 (09:42 +1100)]
xfs: Non-blocking inode locking in IO completion
commit
77d7a0c2eeb285c9069e15396703d0cb9690ac50 upstream
The introduction of barriers to loop devices has created a new IO
order completion dependency that XFS does not handle. The loop
device implements barriers using fsync and so turns a log IO in the
XFS filesystem on the loop device into a data IO in the backing
filesystem. That is, the completion of log IOs in the loop
filesystem are now dependent on completion of data IO in the backing
filesystem.
This can cause deadlocks when a flush daemon issues a log force with
an inode locked because the IO completion of IO on the inode is
blocked by the inode lock. This in turn prevents further data IO
completion from occurring on all XFS filesystems on that CPU (due to
the shared nature of the completion queues). This then prevents the
log IO from completing because the log is waiting for data IO
completion as well.
The fix for this new completion order dependency issue is to make
the IO completion inode locking non-blocking. If the inode lock
can't be grabbed, simply requeue the IO completion back to the work
queue so that it can be processed later. This prevents the
completion queue from being blocked and allows data IO completion on
other inodes to proceed, hence avoiding completion order dependent
deadlocks.
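A rough user-space sketch of the requeue pattern described above; the
type and callback names are invented, only the trylock-or-requeue shape
follows the fix.

    #include <pthread.h>
    #include <stdbool.h>

    struct io_completion {
        pthread_mutex_t *inode_lock;   /* must be held to finish the I/O */
        /* ... completion data ... */
    };

    /* Returns true if the completion was processed, false if it was
     * pushed back onto the work queue because the inode lock could not
     * be taken without blocking. */
    static bool process_completion(struct io_completion *ioc,
                                   void (*requeue)(struct io_completion *))
    {
        if (pthread_mutex_trylock(ioc->inode_lock) != 0) {
            requeue(ioc);              /* retry later, never block here */
            return false;
        }
        /* ... finish the I/O completion work ... */
        pthread_mutex_unlock(ioc->inode_lock);
        return true;
    }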
Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Christoph Hellwig [Thu, 11 Mar 2010 22:42:15 +0000 (09:42 +1100)]
xfs: remove invalid barrier optimization from xfs_fsync
commit
e8b217e7530c6a073ac69f1c85b922d93fdf5647 upstream
We always need to flush the disk write cache and can't skip it just because
no inode attributes have changed.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dave Chinner [Thu, 11 Mar 2010 22:42:14 +0000 (09:42 +1100)]
xfs: don't hold onto reserved blocks on remount, ro
commit
cbe132a8bdcff0f9afd9060948fb50597c7400b8 upstream
If we hold onto reserved blocks when doing a remount,ro we end
up writing the blocks used count to disk that includes the reserved
blocks. Reserved blocks are not actually used, so this results in
the values in the superblock being incorrect.
Hence if we run xfs_check or xfs_repair -n while the filesystem is
mounted remount,ro we end up with an inconsistent filesystem being
reported. Also, running xfs_copy on the remount,ro filesystem will
result in an inconsistent image being generated.
To fix this, unreserve the blocks when doing the remount,ro, and
reserve them again on remount,rw. This way a remount,ro filesystem
will appear consistent on disk to all utilities.
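A hedged sketch of that sequence (the helper and the constant are
hypothetical, not the XFS API): release the reservation before going
read-only so the used-block count written to disk is accurate, and take
it again when returning to read-write.

    enum { MODEL_RESERVED_BLOCKS = 8192 };   /* illustrative amount */

    /* Illustrative remount hook: 0 reserved blocks while read-only,
     * the normal reservation once writable again. */
    static void remount_adjust_reservation(int going_readonly,
                                           void (*set_reserved)(unsigned long))
    {
        set_reserved(going_readonly ? 0 : MODEL_RESERVED_BLOCKS);
    }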
Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Christoph Hellwig [Thu, 11 Mar 2010 22:42:13 +0000 (09:42 +1100)]
xfs: quota limit statvfs available blocks
commit
9b00f30762fe9f914eb6e03057a616ed63a4e8ca upstream
A "df" run on an NFS client of an exported XFS file system reports
the wrong information for "available" blocks. When a block quota is
enforced, the amount reported as free is limited by the quota, but
the amount reported available is not (and should be).
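The arithmetic being fixed, as a hedged stand-alone model (the field
names are hypothetical, not the XFS structures): the "available" figure
must be clamped by the remaining quota, just as the "free" figure
already is.

    #include <stdint.h>

    struct quota_view {
        uint64_t fs_free;     /* free blocks in the filesystem */
        uint64_t hard_limit;  /* quota block hard limit, 0 = none */
        uint64_t used;        /* blocks charged to this quota */
    };

    /* Blocks the user may still allocate: the filesystem's free space,
     * but never more than the quota still allows. */
    static uint64_t blocks_available(const struct quota_view *q)
    {
        uint64_t quota_left;

        if (!q->hard_limit)
            return q->fs_free;
        quota_left = q->hard_limit > q->used ? q->hard_limit - q->used : 0;
        return q->fs_free < quota_left ? q->fs_free : quota_left;
    }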
Reported-by: Guk-Bong, Kwon <gbkwon@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dave Chinner [Thu, 11 Mar 2010 22:42:12 +0000 (09:42 +1100)]
xfs: xfs_swap_extents needs to handle dynamic fork offsets
commit
e09f98606dcc156de1146c209d45a0d6d5f51c3f upstream
When swapping extents, we can corrupt inodes by swapping data forks
that are in incompatible formats. This is caused by the two inodes
having different fork offsets due to the presence of an attribute
fork on an attr2 filesystem. xfs_fsr tries to be smart about
setting the fork offset, but the trick it plays only works on attr1
(old fixed format attribute fork) filesystems.
Changing the way xfs_fsr sets up the attribute fork will prevent
this situation from ever occurring, so in the kernel code we can get
by with a preventative fix - check that the data fork in the
defragmented inode is in a format valid for the inode it is being
swapped into. This will lead to files that will silently and
potentially repeatedly fail defragmentation, so issue a warning to
the log when this particular failure occurs to let us know that
xfs_fsr needs updating/fixing.
To help identify how to improve xfs_fsr to avoid this issue, add
trace points for the inodes being swapped so that we can determine
why the swap was rejected and to confirm that the code is making the
right decisions and modifications when swapping forks.
A further complication: even when the swap is allowed to proceed with
different fork offsets between the two inodes, the value for the
maximum number of extents the data fork can hold can end up wrong.
Make sure these are also set correctly after the swap occurs.
Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dave Chinner [Thu, 11 Mar 2010 22:42:11 +0000 (09:42 +1100)]
xfs: fix stale inode flush avoidance
commit
4b6a46882cca8349e8942e2650c33b11bc571c92 upstream
When reclaiming stale inodes, we need to guarantee that inodes are
unpinned before returning with a "clean" status. If we don't we can
reclaim inodes that are pinned, leading to use after free in the
transaction subsystem as transactions complete.
Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dave Chinner [Thu, 11 Mar 2010 22:42:10 +0000 (09:42 +1100)]
xfs: reclaim all inodes by background tree walks
commit
57817c68229984818fea9e614d6f95249c3fb098 upstream
We cannot do direct inode reclaim without taking the flush lock to
ensure that we do not reclaim an inode under IO. We check the inode
is clean before doing direct reclaim, but this is not good enough
because the inode flush code marks the inode clean once it has
copied the in-core dirty state to the backing buffer.
It is the flush lock that determines whether the inode is still
under IO, even though it is marked clean, and the inode is still
required at IO completion so we can't reclaim it even though it is
clean in core. Hence the requirement that we need to take the flush
lock even on clean inodes because this guarantees that the inode
writeback IO has completed and it is safe to reclaim the inode.
With delayed write inode flushing, we could end up waiting a long
time on the flush lock even for a clean inode. The background
reclaim already handles this efficiently, so avoid all the problems
by killing the direct reclaim path altogether.
Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dave Chinner [Thu, 11 Mar 2010 22:42:09 +0000 (09:42 +1100)]
xfs: Avoid inodes in reclaim when flushing from inode cache
commit
018027be90a6946e8cf3f9b17b5582384f7ed117 upstream
The reclaim code will handle flushing of dirty inodes before reclaim
occurs, so avoid them when determining whether an inode is a
candidate for flushing to disk when walking the radix trees. This
is based on a test patch from Christoph Hellwig.
Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dave Chinner [Thu, 11 Mar 2010 22:42:08 +0000 (09:42 +1100)]
xfs: reclaim inodes under a write lock
commit
c8e20be020f234c8d492927a424a7d8bbefd5b5d upstream
Make the inode tree reclaim walk exclusive to avoid races with
concurrent sync walkers and lookups. This is a version of a patch
posted by Christoph Hellwig that avoids all the code duplication.
Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dave Chinner [Thu, 11 Mar 2010 22:42:07 +0000 (09:42 +1100)]
xfs: Ensure we force all busy extents in range to disk
commit
fd45e4784164d1017521086524e3442318c67370 upstream
When we search for and find a busy extent during allocation we
force the log out to ensure the extent free transaction is on
disk before the allocation transaction. The current implementation
has a subtle bug in it--it does not handle multiple overlapping
ranges.
That is, if we free lots of little extents into a single
contiguous extent, then allocate the contiguous extent, the busy
search code stops searching at the first extent it finds that
overlaps the allocated range. It then uses the commit LSN of the
transaction to force the log out to.
Unfortunately, the other busy ranges might have more recent
commit LSNs than the first busy extent that is found, and this
results in xfs_alloc_search_busy() returning before all the
extent free transactions are on disk for the range being
allocated. This can lead to potential metadata corruption or
stale data exposure after a crash because log replay won't replay
all the extent free transactions that cover the allocation range.
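An illustrative model of the corrected search (the types are invented
for the sketch): scan every overlapping busy extent and force the log
to the highest commit LSN found, not just the LSN of the first match.

    #include <stdint.h>
    #include <stdbool.h>

    struct busy_extent {
        uint64_t start, len;     /* freed range not yet stable on disk */
        uint64_t commit_lsn;     /* LSN of the freeing transaction */
    };

    static bool overlaps(const struct busy_extent *b,
                         uint64_t start, uint64_t len)
    {
        return b->start < start + len && start < b->start + b->len;
    }

    /* LSN the log must be forced to before blocks in [start, start+len)
     * may be reused; 0 means no busy extent overlaps the range. */
    static uint64_t lsn_to_force(const struct busy_extent *busy, int n,
                                 uint64_t start, uint64_t len)
    {
        uint64_t lsn = 0;

        for (int i = 0; i < n; i++)   /* check every overlap, not just one */
            if (overlaps(&busy[i], start, len) && busy[i].commit_lsn > lsn)
                lsn = busy[i].commit_lsn;
        return lsn;
    }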
Modified-by: Alex Elder <aelder@sgi.com>
(Dropped the "found" argument from the xfs_alloc_busysearch trace
event.)
Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dave Chinner [Thu, 11 Mar 2010 22:42:06 +0000 (09:42 +1100)]
xfs: Don't flush stale inodes
commit
44e08c45cc14e6190a424be8d450070c8e508fad upstream
Because inodes remain in cache much longer than inode buffers do
under memory pressure, we can get the situation where we have
stale, dirty inodes being reclaimed but the backing storage has
been freed. Hence we should never, ever flush XFS_ISTALE inodes
to disk as there is no guarantee that the backing buffer is in
cache and still marked stale when the flush occurs.
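A tiny hedged sketch of the rule (the flag name is borrowed from the
message, the helper is invented): writeback is skipped entirely for
inodes marked stale.

    #include <stdbool.h>

    #define MODEL_ISTALE 0x1

    /* The backing buffer for a stale inode may already be gone, so a
     * stale inode must never be flushed to disk. */
    static bool inode_may_be_flushed(unsigned int iflags)
    {
        return !(iflags & MODEL_ISTALE);
    }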
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Christoph Hellwig [Thu, 11 Mar 2010 22:42:05 +0000 (09:42 +1100)]
xfs: fix timestamp handling in xfs_setattr
commit
d6d59bada372bcf8bd36c3bbc71c485c29dd2a4b upstream
We currently have some rather odd code in xfs_setattr for
updating the a/c/mtime timestamps:
- first we do a non-transaction update if all three are updated
together
- second we implicitly update the ctime for various changes
instead of relying on the ATTR_CTIME flag
- third we set the timestamps to the current time instead of the
arguments in the iattr structure in many cases.
This patch makes sure we update it in a consistent way:
- always transactional
- ctime is only updated if ATTR_CTIME is set or we do a size
update, which is a special case
- always to the times passed in from the caller instead of the
current time
The only non-size caller of xfs_setattr that doesn't come from
the VFS is updated to set ATTR_CTIME and pass in a valid ctime
value.
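A condensed model of the intended behaviour; the struct and flag names
below are simplified stand-ins, not the kernel's definitions.

    #include <stdbool.h>
    #include <time.h>

    #define MODEL_ATTR_MTIME 0x1
    #define MODEL_ATTR_CTIME 0x2

    struct iattr_model { unsigned mask; struct timespec mtime, ctime; };
    struct inode_model { struct timespec mtime, ctime; };

    /* Timestamps come from the caller, never from "now", and ctime is
     * only touched when ATTR_CTIME is set or the operation changes the
     * file size (the special case noted above). */
    static void apply_times(struct inode_model *ip,
                            const struct iattr_model *ia, bool size_change)
    {
        if (ia->mask & MODEL_ATTR_MTIME)
            ip->mtime = ia->mtime;
        if ((ia->mask & MODEL_ATTR_CTIME) || size_change)
            ip->ctime = ia->ctime;
    }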
Reported-by: Eric Blake <ebb9@byu.net>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Christoph Hellwig [Thu, 11 Mar 2010 22:42:04 +0000 (09:42 +1100)]
xfs: check for not fully initialized inodes in xfs_ireclaim
commit
b44b1126279b60597f96bbe77507b1650f88a969 upstream
Add an assert for inodes not added to the inode cache in xfs_ireclaim,
to make sure we're not going to introduce something like the
famous nfsd inode cache bug again.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Jason Gunthorpe [Thu, 11 Mar 2010 22:42:03 +0000 (09:42 +1100)]
xfs: Fix error return for fallocate() on XFS
commit
44a743f68705c681439f264deb05f8f38e9048d3 upstream
Noticed that through glibc fallocate would return 28 rather than -1
and errno = 28 for ENOSPC. The xfs routines use XFS_ERROR-style
positive error codes while the syscalls use negative return
codes. Fix up the two cases in the xfs_vn_fallocate syscall to convert to
negative.
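The sign convention at issue, shown as a tiny hedged sketch rather than
the actual xfs_vn_fallocate hunk:

    #include <errno.h>

    /* Pretend this internal helper returns a positive XFS-style code. */
    static int internal_alloc_helper(void)
    {
        return ENOSPC;         /* positive 28 */
    }

    static long fallocate_entry_point(void)
    {
        int error = internal_alloc_helper();

        /* Convert to the negative convention the syscall layer expects;
         * otherwise user space sees "return 28" instead of -1 with
         * errno == ENOSPC. */
        return error ? -error : 0;
    }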
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Andy Poling [Thu, 11 Mar 2010 22:42:02 +0000 (09:42 +1100)]
xfs: Wrapped journal record corruption on read at recovery
commit
fc5bc4c85c45f0bf854404e5736aa8b65720a18d upstream
Summary of problem:
If a journal record wraps at the physical end of the journal, it has to be
read in two parts in xlog_do_recovery_pass(): a read at the physical end and a
read at the physical beginning. If xlog_bread() has to re-align the first
read, the second read request does not take that re-alignment into account.
If the first read was re-aligned, the second read over-writes the end of the
data from the first read, effectively corrupting it. This can happen either
when reading the record header or reading the record data.
The first sanity check in xlog_recover_process_data() is to check for a valid
clientid, so that is the error reported.
Summary of fix:
If there was a first read at the physical end, XFS_BUF_PTR() returns where the
data was requested to begin. Conversely, because it is the result of
xlog_align(), offset indicates where the requested data for the first read
actually begins - whether or not xlog_bread() has re-aligned it.
Using offset as the base for the calculation of where to place the second read
data ensures that it will be correctly placed immediately following the data
from the first read instead of sometimes over-writing the end of it.
The attached patch has resolved the reported problem of occasional inability
to recover the journal (reporting "bad clientid").
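A small arithmetic sketch of the fix (the variable names are
hypothetical): the second read's data is placed relative to where the
first read's data actually starts, not where it was requested to start.

    #include <stddef.h>

    /*
     * buf_ptr   - where the first read was requested to begin
     *             (what XFS_BUF_PTR() reports)
     * data_off  - where the first read's data actually begins after any
     *             re-alignment done by the read routine (xlog_align())
     * first_len - bytes belonging to the first, end-of-log part
     */
    static char *second_read_dest(char *buf_ptr, char *data_off,
                                  size_t first_len)
    {
        (void)buf_ptr;   /* basing the copy on this corrupts the record */
        return data_off + first_len;
    }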
Signed-off-by: Andy Poling <andy@realbig.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Christoph Hellwig [Thu, 11 Mar 2010 22:42:01 +0000 (09:42 +1100)]
xfs: I/O completion handlers must use NOFS allocations
commit
80641dc66a2d6dfb22af4413227a92b8ab84c7bb upstream
When completing I/O requests we must not allow the memory allocator to
recurse into the filesystem, as we might deadlock on waiting for the
I/O completion otherwise. The only thing currently allocating normal
GFP_KERNEL memory is the allocation of the transaction structure for
the unwritten extent conversion. Add a memflags argument to
_xfs_trans_alloc to allow controlling the allocator behaviour.
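Illustrative only, with simplified stand-ins for the kernel's allocation
flags: the transaction-allocation helper grows a mode argument, and the
I/O completion path passes a NOFS-style value so the allocator cannot
recurse into the filesystem.

    #include <stdlib.h>

    /* Stand-ins for the kernel's allocation modes. */
    enum alloc_mode { ALLOC_MAY_RECURSE_FS, ALLOC_NOFS };

    struct xact { int dummy; };

    /* The helper now takes an allocation mode instead of hard-coding
     * "may recurse into the filesystem". */
    static struct xact *trans_alloc(enum alloc_mode mode)
    {
        (void)mode;    /* a real allocator would honour this flag */
        return calloc(1, sizeof(struct xact));
    }

    /* Completion context: allocating with filesystem recursion allowed
     * could deadlock on the very I/O being completed. */
    static struct xact *trans_alloc_from_completion(void)
    {
        return trans_alloc(ALLOC_NOFS);
    }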
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Thomas Neumann <tneumann@users.sourceforge.net>
Tested-by: Thomas Neumann <tneumann@users.sourceforge.net>
Reviewed-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Christoph Hellwig [Thu, 11 Mar 2010 22:42:00 +0000 (09:42 +1100)]
xfs: fix mmap_sem/iolock inversion in xfs_free_eofblocks
commit
c56c9631cbe88f08854a56ff9776c1f310916830 upstream
When xfs_free_eofblocks is called from ->release the VM might already
hold the mmap_sem, but in the write path we take the iolock before
taking the mmap_sem in the generic write code.
Switch xfs_free_eofblocks to only trylock the iolock if called from
->release and skip trimming the preallocated blocks in that case.
We'll still free them later on the final iput.
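A user-space model of the trylock-and-skip behaviour; the names are
invented, only the shape of the logic follows the description above.

    #include <pthread.h>
    #include <stdbool.h>

    /* In the ->release path the caller may already hold mmap_sem, and
     * the normal order is iolock before mmap_sem, so blocking on the
     * iolock here could deadlock.  Trylock instead and skip the
     * optional trimming if the lock is contended; it will still happen
     * on the final iput. */
    static bool trim_eofblocks_on_release(pthread_mutex_t *iolock,
                                          void (*trim)(void))
    {
        if (pthread_mutex_trylock(iolock) != 0)
            return false;              /* skipped, not an error */
        trim();
        pthread_mutex_unlock(iolock);
        return true;
    }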
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Christoph Hellwig [Thu, 11 Mar 2010 22:41:59 +0000 (09:41 +1100)]
xfs: simplify inode teardown
commit
848ce8f731aed0a2d4ab5884a4f6664af73d2dd0 upstream
Currently the reclaim code for the case where the final inode
teardown does not need the deferred reclaim pass is overly
complicated. We know that the inode is clean, but instead of just
directly reclaiming the clean inode we go through the whole process
of marking the inode reclaimable just to directly reclaim it from
the calling context. Besides being overly complicated, this
introduces a race where iget could recycle an inode between being
marked reclaimable and actually being reclaimed, leading to panics.
This patch gets rid of the existing reclaim path, and replaces it with
a simple call to xfs_ireclaim if the inode was clean. While we're at
it we also use the slightly more lax xfs_inode_clean check we'd use
later to determine if we need to flush the inode here.
Finally get rid of xfs_reclaim function and place the remaining small
bits of reclaim code directly into xfs_fs_destroy_inode.
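A hedged sketch of the simplified teardown path (the helper names are
invented stand-ins for the XFS routines involved):

    #include <stdbool.h>

    struct xinode_model { bool clean; };

    static void ireclaim(struct xinode_model *ip)       /* stand-in */
    {
        (void)ip;   /* free the in-core inode immediately */
    }

    static void defer_reclaim(struct xinode_model *ip)  /* stand-in */
    {
        (void)ip;   /* hand off to the background reclaim walker */
    }

    /* destroy_inode: a clean inode can be reclaimed right here, without
     * first being marked reclaimable and picked up again later. */
    static void destroy_inode(struct xinode_model *ip)
    {
        if (ip->clean)
            ireclaim(ip);
        else
            defer_reclaim(ip);
    }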
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Patrick Schreurs <patrick@news-service.com>
Reported-by: Tommy van Leeuwen <tommy@news-service.com>
Tested-by: Patrick Schreurs <patrick@news-service.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Chris Wilson [Thu, 18 Mar 2010 11:56:54 +0000 (11:56 +0000)]
drm: Return ENODEV if the inode mapping changes
commit
da58405860b992d2bb21ebae5d685fe3204dd3f0 upstream.
Replace a BUG_ON with an error code in the event that the inode mapping
changes between calls to drm_open. This may happen for instance if udev
is loaded subsequent to the original opening of the device:
[ 644.291870] kernel BUG at drivers/gpu/drm/drm_fops.c:146!
[ 644.291876] invalid opcode: 0000 [#1] SMP
[ 644.291882] last sysfs file: /sys/kernel/uevent_seqnum
[ 644.291888]
[ 644.291895] Pid: 7276, comm: lt-cairo-test-s Not tainted 2.6.34-rc1 #2 N150/N210/N220 /N150/N210/N220
[ 644.291903] EIP: 0060:[<c11c70e3>] EFLAGS: 00210283 CPU: 0
[ 644.291912] EIP is at drm_open+0x4b1/0x4e2
[ 644.291918] EAX: f72d8d18 EBX: f790a400 ECX: f73176b8 EDX: 00000000
[ 644.291923] ESI: f790a414 EDI: f790a414 EBP: f647ae20 ESP: f647adfc
[ 644.291929] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
[ 644.291937] Process lt-cairo-test-s (pid: 7276, ti=f647a000 task=f73f5c80 task.ti=f647a000)
[ 644.291941] Stack:
[ 644.291945] 00000000 f7bb7400 00000080 f6451100 f73176b8 f6479214 f6451100 f73176b8
[ 644.291957] <0> c1297ce0 f647ae34 c11c6c04 f73176b8 f7949800 00000000 f647ae54 c1080ac5
[ 644.291969] <0> f7949800 f6451100 00000000 f6451100 f73176b8 f6452780 f647ae70 c107d1e6
[ 644.291982] Call Trace:
[ 644.291991] [<c11c6c04>] ? drm_stub_open+0x8a/0xb8
[ 644.292000] [<c1080ac5>] ? chrdev_open+0xef/0x106
[ 644.292008] [<c107d1e6>] ? __dentry_open+0xd4/0x1a6
[ 644.292015] [<c107d35b>] ? nameidata_to_filp+0x31/0x45
[ 644.292022] [<c10809d6>] ? chrdev_open+0x0/0x106
[ 644.292030] [<c10864e2>] ? do_last+0x346/0x423
[ 644.292037] [<c108789f>] ? do_filp_open+0x190/0x415
[ 644.292046] [<c1071eb5>] ? handle_mm_fault+0x214/0x710
[ 644.292053] [<c107d008>] ? do_sys_open+0x4d/0xe9
[ 644.292061] [<c1016462>] ? do_page_fault+0x211/0x23f
[ 644.292068] [<c107d0f0>] ? sys_open+0x23/0x2b
[ 644.292075] [<c1002650>] ? sysenter_do_call+0x12/0x26
[ 644.292079] Code: 89 f0 89 55 dc e8 8d 96 0a 00 8b 45 e0 8b 55 dc 83 78 04 01 75 28 8b 83 18 02 00 00 85 c0 74 0f 8b 4d ec 3b 81 ac 00 00 00 74 13 <0f> 0b eb fe 8b 4d ec 8b 81 ac 00 00 00 89 83 18 02 00 00 89 f0
[ 644.292143] EIP: [<c11c70e3>] drm_open+0x4b1/0x4e2 SS:ESP 0068:f647adfc
[ 644.292175] ---[ end trace 2ddd476af89a60fa ]---
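The shape of the fix, as a hedged sketch rather than the exact drm_open
hunk: the consistency check becomes an error return instead of a BUG_ON.

    #include <errno.h>
    #include <stddef.h>

    struct mapping;                      /* opaque for this sketch */
    struct drm_dev_model   { struct mapping *dev_mapping; };
    struct drm_inode_model { struct mapping *i_mapping; };

    /* If udev re-created the device node, the inode's mapping may no
     * longer match the one recorded at first open.  Returning -ENODEV
     * lets user space cope instead of hitting a BUG_ON. */
    static int check_dev_mapping(struct drm_dev_model *dev,
                                 struct drm_inode_model *inode)
    {
        if (dev->dev_mapping && dev->dev_mapping != inode->i_mapping)
            return -ENODEV;
        if (!dev->dev_mapping)
            dev->dev_mapping = inode->i_mapping;
        return 0;
    }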
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Alex Deucher [Wed, 10 Mar 2010 23:33:03 +0000 (18:33 -0500)]
drm/radeon/kms: fix pal tv-out support on legacy IGP chips
commit
15f7207761cfcf8f53fb6e5cacffe060478782c3 upstream.
Based on ddx patch by Andrzej Hajda.
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dave Airlie [Fri, 19 Mar 2010 00:33:44 +0000 (10:33 +1000)]
drm/radeon/kms: don't print error on -ERESTARTSYS.
commit
97f23b3d85a4d734a8584dade3a34579931c8f8d upstream.
We can get this if the user moves the mouse while we are waiting to
move buffers around during validation. Don't treat it as an error.
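A sketch of the idea (simplified, not the radeon code): suppress the
error print when the wait was merely interrupted and the ioctl will be
restarted.

    #include <errno.h>
    #include <stdio.h>

    static void report_validate_result(int r)
    {
        /* -ERESTARTSYS only means a signal interrupted the wait and the
         * ioctl will be restarted; it is not a real failure. */
        if (r && r != -ERESTARTSYS)
            fprintf(stderr, "radeon: object validation failed (%d)\n", r);
    }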
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Oleg Nesterov [Thu, 1 Apr 2010 13:13:57 +0000 (15:13 +0200)]
oom: fix the unsafe usage of badness() in proc_oom_score()
commit
b95c35e76b29ba812e5dabdd91592e25ec640e93 upstream.
proc_oom_score(task) has a reference to task_struct, but that is all.
If this task was already released before we take tasklist_lock
- we can't use task->group_leader, it points to nowhere
- it is not safe to call badness() even if this task is
->group_leader, has_intersects_mems_allowed() assumes
it is safe to iterate over ->thread_group list.
- even worse, badness() can hit ->signal == NULL
Add the pid_alive() check to ensure __unhash_process() was not called.
Also, use "task" instead of task->group_leader. badness() should return
the same result for any sub-thread. Currently this is not true, but
this should be changed anyway.
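A hedged model of the guard being added; the struct below stands in for
task_struct, and its fields stand in for pid_alive() and badness().

    #include <stdbool.h>

    struct task_model {
        bool alive;        /* stand-in for pid_alive(task) */
        long badness;      /* stand-in for the badness() result */
    };

    /* Under tasklist_lock, only consult the task if it has not been
     * unhashed; otherwise report 0 rather than touching group_leader,
     * signal or the thread_group list, which may already be gone. */
    static long oom_score(const struct task_model *task)
    {
        if (!task->alive)
            return 0;
        return task->badness;
    }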
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Nikolaus Schulz [Wed, 31 Mar 2010 17:21:10 +0000 (02:21 +0900)]
fat: fix buffer overflow in vfat_create_shortname()
commit
30d1872d9eb3663b4cf7bdebcbf5cd465674cced upstream.
When using the string representation of a random counter as part of the base
name, ensure that it is no longer than 4 bytes.
Since we are repeatedly decrementing the counter in a loop until we have found a
unique base name, the counter may wrap around zero; therefore, it is not enough
to mask its higher bits before entering the loop; this must be done
inside the loop.
[hirofumi@mail.parknet.co.jp: use snprintf()]
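A self-contained model of the masking (illustrative, not the vfat code):
the counter is masked to 16 bits on every use, so the 4-byte suffix can
never overflow, even after the counter wraps around zero.

    #include <stdio.h>

    /* Build the 4-character hex suffix from the counter.  The mask is
     * applied on every use, inside the retry loop, because the caller
     * keeps decrementing the counter and it may wrap past zero. */
    static void counter_suffix(unsigned int counter, char out[5])
    {
        snprintf(out, 5, "%04x", counter & 0xffff);
    }

    /* usage sketch:
     *   for (counter = start; ; counter--) {
     *       counter_suffix(counter, suffix);
     *       if (shortname_is_unique(base, suffix))   // hypothetical
     *           break;
     *   }
     */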
Signed-off-by: Nikolaus Schulz <microschulz@web.de>
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>