Takashi Iwai [Thu, 11 Sep 2014 10:59:21 +0000 (12:59 +0200)]
ALSA: hda - Fix invalid pin powermap without jack detection
commit 7a9744cb455e6faa287e148394b4b422a6f3c5c4 upstream.
When a driver is set up without the jack detection explicitly (either
by passing a model option or via a specific fixup), the pin powermap
of IDT/STAC codecs is set up wrongly, resulting in silent output.
This is because of a logic failure in stac_init_power_map().
It tries to avoid creating a callback for the pins that have other
auto-hp and auto-mic callbacks, but the check is done in a wrong way
at a wrong time. The stac_init_power_map() should be called after
creating other jack detection ctls, and the jack callback should be
created only for jack-detectable widgets.
This patch fixes the check in stac_init_power_map() and its callee
at the right place, after snd_hda_gen_build_controls().
Reported-by: Adam Richter <adam_richter2004@yahoo.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Iwai [Tue, 2 Sep 2014 05:21:56 +0000 (07:21 +0200)]
ALSA: hda - Fix COEF setups for ALC1150 codec
commit acf08081adb5e8fe0519eb97bb49797ef52614d6 upstream.
ALC1150 codec seems to need the COEF- and PLL-setups just like its
compatible ALC882 codec. Some machines (e.g. SunMicro X10SAT) show
the problem like too low output volumes unless the COEF setup is
applied.
Reported-and-tested-by: Dana Goyette <danagoyette@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Clemens Ladisch [Thu, 21 Aug 2014 18:55:21 +0000 (20:55 +0200)]
ALSA: core: fix buffer overflow in snd_info_get_line()
commit ddc64b278a4dda052390b3de1b551e59acdff105 upstream.
snd_info_get_line() documents that its last parameter must be one
less than the buffer size, but this API design guarantees that
(literally) every caller gets it wrong.
Just change this parameter to have its obvious meaning.
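For illustration, the caller-side effect is roughly as below; this is a hypothetical proc read handler, not a specific in-tree caller, and process_line() is a placeholder:
    /* Hypothetical handler sketch, for illustration only. */
    static void example_read(struct snd_info_entry *entry,
                             struct snd_info_buffer *buffer)
    {
            char line[64];

            /* Old API: every caller had to remember the "- 1". */
            /* while (!snd_info_get_line(buffer, line, sizeof(line) - 1)) */

            /* New API: the last parameter is simply the buffer size. */
            while (!snd_info_get_line(buffer, line, sizeof(line)))
                    process_line(line);     /* hypothetical helper */
    }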
Reported-by: Tommi Rantala <tt.rantala@gmail.com>
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Will Deacon [Fri, 22 Aug 2014 13:13:24 +0000 (14:13 +0100)]
arm64: ptrace: fix compat hardware watchpoint reporting
commit 27d7ff273c2aad37b28f6ff0cab2cfa35b51e648 upstream.
I'm not sure what I was on when I wrote this, but when iterating over
the hardware watchpoint array (hbp_watch_array), our index is off by
ARM_MAX_BRP, so we walk off the end of our thread_struct...
... except, a dodgy condition in the loop means that it never executes
at all (bp cannot be NULL).
This patch fixes the code so that we remove the bp check and use the
correct index for accessing the watchpoint structures.
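A rough sketch of the indexing shape after the fix (not the literal arm64 code; the array and surrounding context are approximated):
    /* The combined slot index starts at ARM_MAX_BRP, but the watchpoint
     * array only has (ARM_MAX_HBP_SLOTS - ARM_MAX_BRP) entries, so rebase
     * the index before dereferencing instead of walking off the end of
     * thread_struct.  The old bp == NULL check is dropped because bp can
     * never be NULL here. */
    for (i = ARM_MAX_BRP; i < ARM_MAX_HBP_SLOTS; ++i) {
            if (hbp_watch_array[i - ARM_MAX_BRP] == bp) {
                    /* report slot (i - ARM_MAX_BRP) to userspace */
                    break;
            }
    }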
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Josef Bacik [Mon, 25 Aug 2014 17:59:41 +0000 (13:59 -0400)]
trace: Fix epoll hang when we race with new entries
commit 4ce97dbf50245227add17c83d87dc838e7ca79d0 upstream.
Epoll on trace_pipe can sometimes hang in a weird case. If the ring buffer is
empty when we set waiters_pending but an event shows up exactly at that moment
we can miss being woken up by the ring buffers irq work. Since
ring_buffer_empty() is inherently racy we will sometimes think that the buffer
is not empty. So we don't get woken up and we don't think there are any events
even though there were some ready when we added the watch, which makes us hang.
This patch fixes this by making sure that we are actually on the wait list
before we set waiters_pending, and add a memory barrier to make sure
ring_buffer_empty() is going to be correct.
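A simplified sketch of the ordering the fix establishes (field names approximated from the description above; consumer side only):
    /* Get onto the wait list first, then advertise the waiter, then
     * re-check emptiness behind a full barrier. */
    DEFINE_WAIT(wait);

    prepare_to_wait(&work->waiters, &wait, TASK_INTERRUPTIBLE);

    work->waiters_pending = true;

    /* Pairs with the producer: either it sees waiters_pending and kicks
     * its irq work, or we see the event it just wrote. */
    smp_mb();

    if (ring_buffer_empty(buffer))
            schedule();

    finish_wait(&work->waiters, &wait);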
Link: http://lkml.kernel.org/p/1408989581-23727-1-git-send-email-jbacik@fb.com
Cc: Martin Lau <kafai@fb.com>
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Simon Lindgren [Tue, 26 Aug 2014 19:13:24 +0000 (21:13 +0200)]
i2c: at91: Fix a race condition during signal handling in at91_do_twi_xfer.
commit 6721f28a26efd6368497abbdef5dcfc59608d899 upstream.
There is a race condition in at91_do_twi_xfer when signals arrive.
If a signal is received while waiting for a transfer to complete
wait_for_completion_interruptible_timeout() will return -ERESTARTSYS.
This is not handled correctly resulting in interrupts still being
enabled and a transfer being in flight when we return.
Symptoms include a range of oopses and bus lockups. Oopses can happen
when the transfer completes because the interrupt handler will corrupt
the stack. If a new transfer is started before the interrupt fires
the controller will start a new transfer in the middle of the old one,
resulting in confused slaves and a locked bus.
To avoid this, use wait_for_completion_io_timeout instead so that we
don't have to deal with gracefully shutting down the transfer and
disabling the interrupts.
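The shape of the change is roughly as follows (field names follow the at91 driver but are approximated here):
    /* Before: a signal could end the wait with -ERESTARTSYS while the
     * transfer and its interrupts were still live, so the completion
     * could later fire into a stale stack frame. */
    /* ret = wait_for_completion_interruptible_timeout(&dev->cmd_complete,
     *                                     dev->adapter.timeout); */

    /* After: not interruptible, so we only return once the transfer has
     * completed or genuinely timed out. */
    time_left = wait_for_completion_io_timeout(&dev->cmd_complete,
                                               dev->adapter.timeout);
    if (time_left == 0) {
            /* timeout path: disable interrupts, reset the controller */
    }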
Signed-off-by: Simon Lindgren <simon@aqwary.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marek Roszko [Thu, 21 Aug 2014 01:39:41 +0000 (21:39 -0400)]
i2c: at91: add bound checking on SMBus block length bytes
commit 75b81f339c6af43f6f4a1b3eabe0603321dade65 upstream.
The driver was not bounds-checking the received length byte to ensure it was within
the buffer size that is allocated for SMBus blocks. This resulted in buffer overflows
whenever an invalid length byte was received.
It also failed to ensure the length byte was not zero. If it received zero, it would end up
in an infinite loop as the at91_twi_read_next_byte function returned immediately without
allowing RHR to be read to clear the RXRDY interrupt.
Tested against an SMBus-compliant battery.
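A sketch of the kind of check added (I2C_SMBUS_BLOCK_MAX is the standard constant; the surrounding read path and error handling are approximated, not the literal driver code):
    /* The first received byte of an SMBus block transfer is the length. */
    len = at91_twi_read(dev, AT91_TWI_RHR) & 0xff;

    /* Zero would spin forever waiting to clear RXRDY, and anything above
     * the SMBus block maximum would overflow the caller's buffer. */
    if (len == 0 || len > I2C_SMBUS_BLOCK_MAX) {
            /* Reject the transfer rather than overflow or loop; the exact
             * recovery in the real driver is approximated here. */
            return -EPROTO;
    }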
Signed-off-by: Marek Roszko <mark.roszko@gmail.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Will Deacon [Thu, 11 Sep 2014 13:38:16 +0000 (14:38 +0100)]
arm64: flush TLS registers during exec
commit eb35bdd7bca29a13c8ecd44e6fd747a84ce675db upstream.
Nathan reports that we leak TLS information from the parent context
during an exec, as we don't clear the TLS registers when flushing the
thread state.
This patch updates the flushing code so that we:
(1) Unconditionally zero the tpidr_el0 register (since this is fully
context switched for native tasks and zeroed for compat tasks)
(2) Zero the tp_value state in thread_info before clearing the
tpidrro_el0 register for compat tasks (since this is only writable
by the set_tls compat syscall and therefore not fully switched).
A missing compiler barrier is also added to the compat set_tls syscall.
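The flush logic ends up roughly as below; this is a sketch based on the description above, not a verbatim copy of the arm64 code:
    static void tls_thread_flush(void)
    {
            /* (1) Unconditionally clear the native TLS register. */
            asm ("msr tpidr_el0, xzr");

            if (is_compat_task()) {
                    /* (2) Clear the cached value first ... */
                    current->thread.tp_value = 0;

                    /* ... and keep the compiler from reordering it past the
                     * register clear (the missing barrier mentioned above). */
                    barrier();

                    asm ("msr tpidrro_el0, xzr");
            }
    }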
Acked-by: Nathan Lynch <Nathan_Lynch@mentor.com>
Reported-by: Nathan Lynch <Nathan_Lynch@mentor.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Anton Blanchard [Fri, 22 Aug 2014 01:36:52 +0000 (11:36 +1000)]
ibmveth: Fix endian issues with rx_no_buffer statistic
commit cbd5228199d8be45d895d9d0cc2b8ce53835fc21 upstream.
Hidden away in the last 8 bytes of the buffer_list page is a solitary
statistic. It needs to be byte swapped or else ethtool -S will
produce numbers that terrify the user.
Since we do this in multiple places, create a helper function with a
comment explaining what is going on.
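The helper is along these lines (layout taken from the description above; the offset arithmetic and field name are approximated):
    /* The last 8 bytes of the 4K buffer_list page hold a big-endian count
     * of frames dropped because no suitable receive buffer was available. */
    static u64 ibmveth_get_rx_no_buffer(struct ibmveth_adapter *adapter)
    {
            return be64_to_cpu(*(u64 *)((char *)adapter->buffer_list_addr +
                                        4096 - 8));
    }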
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Murali Karicheri [Fri, 5 Sep 2014 17:21:00 +0000 (13:21 -0400)]
ahci: add pcid for Marvel 0x9182 controller
commit c5edfff9db6f4d2c35c802acb4abe0df178becee upstream.
Keystone K2E EVM uses Marvel 0x9182 controller. This requires support
for the ID in the ahci driver.
Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
James Ralston [Wed, 27 Aug 2014 21:29:07 +0000 (14:29 -0700)]
ahci: Add Device IDs for Intel 9 Series PCH
commit 1b071a0947dbce5c184c12262e02540fbc493457 upstream.
This patch adds the AHCI mode SATA Device IDs for the Intel 9 Series PCH.
Signed-off-by: James Ralston <james.d.ralston@intel.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arjun Sreedharan [Sun, 17 Aug 2014 14:30:09 +0000 (20:00 +0530)]
pata_scc: propagate return value of scc_wait_after_reset
commit 4dc7c76cd500fa78c64adfda4b070b870a2b993c upstream.
scc_bus_softreset() should not necessarily return zero.
Propagate its error code instead of discarding it.
Signed-off-by: Arjun Sreedharan <arjun024@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiri Kosina [Thu, 7 Aug 2014 14:29:53 +0000 (16:29 +0200)]
drm/i915: read HEAD register back in init_ring_common() to enforce ordering
commit ece4a17d237a79f63fbfaf3f724a12b6d500555c upstream.
Without this, ring initialization fails reliably during resume with
[drm:init_ring_common] *ERROR* render ring initialization failed ctl 0001f001 head ffffff8804 tail 00000000 start 000e4000
This is not a complete fix, but it is verified to make the ring
initialization failures during resume much less likely.
We were not able to root-cause this bug (likely HW-specific to Gen4 chips)
yet. This is therefore used as duct tape until the problem is fully
understood and a proper fix is created, so that people don't suffer from
completely unusable systems in the meantime.
The discussion and debugging is happening at
https://bugs.freedesktop.org/show_bug.cgi?id=76554
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Deucher [Mon, 28 Jul 2014 03:21:50 +0000 (23:21 -0400)]
drm/radeon: load the lm63 driver for an lm64 thermal chip.
commit 5dc355325b648dc9b4cf3bea4d968de46fd59215 upstream.
Looks like the lm63 driver supports the lm64 as well.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tetsuo Handa [Sun, 3 Aug 2014 11:00:40 +0000 (20:00 +0900)]
drm/ttm: Choose a pool to shrink correctly in ttm_dma_pool_shrink_scan().
commit 46c2df68f03a236b30808bba361f10900c88d95e upstream.
We can use "unsigned int" instead of "atomic_t" by updating the start_pool
variable under _manager->lock. This patch will make it possible to avoid
skipping a pool when choosing one to shrink in round-robin style, after the next
patch changes mutex_lock(_manager->lock) to !mutex_trylock(_manager->lock).
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tetsuo Handa [Sun, 3 Aug 2014 10:59:35 +0000 (19:59 +0900)]
drm/ttm: Fix possible division by 0 in ttm_dma_pool_shrink_scan().
commit 11e504cc705e8ccb06ac93a276e11b5e8fee4d40 upstream.
list_empty(&_manager->pools) being false before taking _manager->lock
does not guarantee that _manager->npools != 0 after taking _manager->lock
because _manager->npools is updated under _manager->lock.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guido Martínez [Tue, 17 Jun 2014 14:17:09 +0000 (11:17 -0300)]
drm/tilcdc: fix double kfree
commit c9a3ad25eddfdb898114a9d73cdb4c3472d9dfca upstream.
display_timings_release calls kfree on the display_timings object passed
to it. Calling kfree after it is wrong. SLUB debug showed the following
warning:
=============================================================================
BUG kmalloc-64 (Tainted: G W ): Object already free
-----------------------------------------------------------------------------
Disabling lock debugging due to kernel taint
INFO: Allocated in of_get_display_timings+0x2c/0x214 age=601 cpu=0 pid=884
__slab_alloc.constprop.79+0x2e0/0x33c
kmem_cache_alloc+0xac/0xdc
of_get_display_timings+0x2c/0x214
panel_probe+0x7c/0x314 [tilcdc]
platform_drv_probe+0x18/0x48
[..snip..]
INFO: Freed in panel_destroy+0x18/0x3c [tilcdc] age=0 cpu=0 pid=907
__slab_free+0x34/0x330
panel_destroy+0x18/0x3c [tilcdc]
tilcdc_unload+0xd0/0x118 [tilcdc]
drm_dev_unregister+0x24/0x98
[..snip..]
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Darren Etheridge <detheridge@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guido Martínez [Tue, 17 Jun 2014 14:17:08 +0000 (11:17 -0300)]
drm/tilcdc: fix release order on exit
commit eb565a2bbadc6a5030a6dbe58db1aa52453e7edf upstream.
Unregister resources in the correct order on tilcdc_drm_fini, which is
the reverse order they were registered during tilcdc_drm_init.
This also means unregistering the driver before releasing its resources.
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Darren Etheridge <detheridge@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guido Martínez [Tue, 17 Jun 2014 14:17:07 +0000 (11:17 -0300)]
drm/tilcdc: panel: fix leak when unloading the module
commit 3a49012224ca9016658a831a327ff6a7fe5bb4f9 upstream.
The driver did not unregister the allocated framebuffer, which caused
memory leaks (and memory manager WARNs) when unloading. Also, the
framebuffer device under /dev still existed after unloading.
Add a call to drm_fbdev_cma_fini when unloading the module to prevent
both issues.
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Darren Etheridge <detheridge@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guido Martínez [Tue, 17 Jun 2014 14:17:06 +0000 (11:17 -0300)]
drm/tilcdc: tfp410: fix dangling sysfs connector node
commit 16dcbdef404f4e87dab985494381939fe0a2d456 upstream.
Add a drm_sysfs_connector_remove call when we destroy the panel to make
sure the connector node in sysfs gets deleted.
This is required for proper unload and re-load of this driver, otherwise
we will get a warning about a duplicate filename in sysfs.
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Darren Etheridge <detheridge@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guido Martínez [Tue, 17 Jun 2014 14:17:05 +0000 (11:17 -0300)]
drm/tilcdc: slave: fix dangling sysfs connector node
commit daa15b4cd1eee58eb1322062a3320b1dbe5dc96e upstream.
Add a drm_sysfs_connector_remove call when we destroy the panel to make
sure the connector node in sysfs gets deleted.
This is required for proper unload and re-load of this driver as a
module. Without this, we would get a warning at re-load time like so:
tda998x 0-0070: found TDA19988
------------[ cut here ]------------
WARNING: CPU: 0 PID: 825 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x54/0x74()
sysfs: cannot create duplicate filename '/class/drm/card0-HDMI-A-1'
Modules linked in: [..]
CPU: 0 PID: 825 Comm: modprobe Not tainted 3.15.0-rc4-00027-g9dcdef4 #82
[<c0013bb8>] (unwind_backtrace) from [<c0011824>] (show_stack+0x10/0x14)
[<c0011824>] (show_stack) from [<c0034e8c>] (warn_slowpath_common+0x68/0x88)
[<c0034e8c>] (warn_slowpath_common) from [<c0034edc>] (warn_slowpath_fmt+0x30/0x40)
[<c0034edc>] (warn_slowpath_fmt) from [<c01243f4>] (sysfs_warn_dup+0x54/0x74)
[<c01243f4>] (sysfs_warn_dup) from [<c0124708>] (sysfs_do_create_link_sd.isra.2+0xb0/0xb8)
[<c0124708>] (sysfs_do_create_link_sd.isra.2) from [<c02ae37c>] (device_add+0x338/0x520)
[<c02ae37c>] (device_add) from [<c02ae6e8>] (device_create_groups_vargs+0xa0/0xc4)
[<c02ae6e8>] (device_create_groups_vargs) from [<c02ae758>] (device_create+0x24/0x2c)
[<c02ae758>] (device_create) from [<c029b4ec>] (drm_sysfs_connector_add+0x64/0x204)
[<c029b4ec>] (drm_sysfs_connector_add) from [<bf0b1b40>] (slave_modeset_init+0x120/0x1bc [tilcdc])
[<bf0b1b40>] (slave_modeset_init [tilcdc]) from [<bf0b2be8>] (tilcdc_load+0x214/0x4c0 [tilcdc])
[<bf0b2be8>] (tilcdc_load [tilcdc]) from [<c029955c>] (drm_dev_register+0xa4/0x104)
[..snip..]
---[ end trace 4df8d614936ebdee ]---
[drm:drm_sysfs_connector_add] *ERROR* failed to register connector device: -17
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Darren Etheridge <detheridge@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guido Martínez [Tue, 17 Jun 2014 14:17:04 +0000 (11:17 -0300)]
drm/tilcdc: panel: fix dangling sysfs connector node
commit e396900e649b0af31161634d87fe37076f46c12b upstream.
Add a drm_sysfs_connector_remove call when we destroy the panel to make
sure the connector node in sysfs gets deleted.
This is required for proper unload and re-load of this driver as a
module. Without this, we would get a warning at re-load time like so:
------------[ cut here ]------------
WARNING: CPU: 0 PID: 824 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x54/0x74()
sysfs: cannot create duplicate filename '/class/drm/card0-LVDS-1'
Modules linked in: [...]
CPU: 0 PID: 824 Comm: modprobe Not tainted 3.15.0-rc4-00027-g6484f96-dirty #81
[<c0013bb8>] (unwind_backtrace) from [<c0011824>] (show_stack+0x10/0x14)
[<c0011824>] (show_stack) from [<c0034e8c>] (warn_slowpath_common+0x68/0x88)
[<c0034e8c>] (warn_slowpath_common) from [<c0034edc>] (warn_slowpath_fmt+0x30/0x40)
[<c0034edc>] (warn_slowpath_fmt) from [<c01243f4>] (sysfs_warn_dup+0x54/0x74)
[<c01243f4>] (sysfs_warn_dup) from [<c0124708>] (sysfs_do_create_link_sd.isra.2+0xb0/0xb8)
[<c0124708>] (sysfs_do_create_link_sd.isra.2) from [<c02ae37c>] (device_add+0x338/0x520)
[<c02ae37c>] (device_add) from [<c02ae6e8>] (device_create_groups_vargs+0xa0/0xc4)
[<c02ae6e8>] (device_create_groups_vargs) from [<c02ae758>] (device_create+0x24/0x2c)
[<c02ae758>] (device_create) from [<c029b4ec>] (drm_sysfs_connector_add+0x64/0x204)
[<c029b4ec>] (drm_sysfs_connector_add) from [<bf0b1fec>] (panel_modeset_init+0xb8/0x134 [tilcdc])
[<bf0b1fec>] (panel_modeset_init [tilcdc]) from [<bf0b2bf0>] (tilcdc_load+0x214/0x4c0 [tilcdc])
[<bf0b2bf0>] (tilcdc_load [tilcdc]) from [<c029955c>] (drm_dev_register+0xa4/0x104)
[ .. snip .. ]
---[ end trace b2d09cd9578b0497 ]---
[drm:drm_sysfs_connector_add] *ERROR* failed to register connector device: -17
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Darren Etheridge <detheridge@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ronald Wahl [Thu, 7 Aug 2014 12:15:50 +0000 (14:15 +0200)]
carl9170: fix sending URBs with wrong type when using full-speed
commit 671796dd96b6cd85b75fba9d3007bcf7e5f7c309 upstream.
The driver assumes that endpoint 4 is always an interrupt endpoint.
Unfortunately the type differs between high-speed and full-speed
configurations while in the former case it is indeed an interrupt
endpoint this is not true for the latter case - here it is a bulk
endpoint. When sending URBs with the wrong type the kernel will
generate a warning message including a backtrace. In this specific
case there will be a huge number of warnings, which can bring the system
to a freeze.
To fix this we are now sending URBs to endpoint 4 using the type
found in the endpoint descriptor.
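Roughly, the fix amounts to choosing the URB setup helper from the descriptor type, along these lines (complete_fn and ctx are placeholders, and the surrounding driver context is approximated):
    /* epd is the descriptor for endpoint 4 in the active configuration. */
    if (usb_endpoint_xfer_int(epd))
            usb_fill_int_urb(urb, udev, usb_sndintpipe(udev, 4),
                             data, len, complete_fn, ctx, epd->bInterval);
    else
            /* Full-speed configuration: endpoint 4 is a bulk endpoint. */
            usb_fill_bulk_urb(urb, udev, usb_sndbulkpipe(udev, 4),
                              data, len, complete_fn, ctx);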
A side note: the carl9170 firmware currently specifies endpoint 4 as an
interrupt endpoint even in the full-speed configuration, but this has
no relevance because before this firmware is loaded the endpoint type
is as described above and after the firmware is running the stick is not
reenumerated and so the old descriptor is used.
Signed-off-by: Ronald Wahl <ronald.wahl@raritan.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Wed, 17 Sep 2014 16:04:18 +0000 (09:04 -0700)]
Linux 3.10.55
Sage Weil [Mon, 4 Aug 2014 14:01:54 +0000 (07:01 -0700)]
libceph: gracefully handle large reply messages from the mon
commit 73c3d4812b4c755efeca0140f606f83772a39ce4 upstream.
We preallocate a few of the message types we get back from the mon. If we
get a larger message than we are expecting, fall back to trying to allocate
a new one instead of blindly using the one we have.
Signed-off-by: Sage Weil <sage@redhat.com>
Reviewed-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ilya Dryomov [Thu, 9 Jan 2014 18:08:21 +0000 (20:08 +0200)]
libceph: rename ceph_msg::front_max to front_alloc_len
commit 3cea4c3071d4e55e9d7356efe9d0ebf92f0c2204 upstream.
Rename front_max field of struct ceph_msg to front_alloc_len to make
its purpose more clear.
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jason Gunthorpe [Thu, 22 May 2014 00:26:44 +0000 (18:26 -0600)]
tpm: Provide a generic means to override the chip returned timeouts
commit 8e54caf407b98efa05409e1fee0e5381abd2b088 upstream.
Some Atmel TPMs provide completely wrong timeouts from their
TPM_CAP_PROP_TIS_TIMEOUT query. This patch detects that and returns
new correct values via a DID/VID table in the TIS driver.
Tested on ARM using an AT97SC3204T FW version 37.16
[PHuewe: without this fix these 'broken' Atmel TPMs won't function on
older kernels]
Signed-off-by: "Berg, Christopher" <Christopher.Berg@atmel.com>
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Peter Huewe <peterhuewe@gmx.de>
[bwh: Backported to 3.10:
- Adjust filename, context
- s/chip->ops->/chip->vendor./]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Sat, 13 Sep 2014 18:30:10 +0000 (11:30 -0700)]
vfs: fix bad hashing of dentries
commit 99d263d4c5b2f541dfacb5391e22e8c91ea982a6 upstream.
Josef Bacik found a performance regression between 3.2 and 3.10 and
narrowed it down to commit bfcfaa77bdf0 ("vfs: use 'unsigned long'
accesses for dcache name comparison and hashing"). He reports:
"The test case is essentially
    for (i = 0; i < 1000000; i++)
        mkdir("a$i");
On xfs on a fio card this goes at about 20k dir/sec with 3.2, and 12k
dir/sec with 3.10. This is because we spend waaaaay more time in
__d_lookup on 3.10 than in 3.2.
The new hashing function for strings is suboptimal for < sizeof(unsigned
long) string names (and hell even > sizeof(unsigned long) string names
that I've tested). I broke out the old hashing function and the new one
into a userspace helper to get real numbers and this is what I'm getting:
Old hash table had 1000000 entries, 0 dupes, 0 max dupes
New hash table had 12628 entries, 987372 dupes, 900 max dupes
We had 11400 buckets with a p50 of 30 dupes, p90 of 240 dupes, p99 of 567 dupes for the new hash
My test does the hash, and then does the d_hash into a integer pointer
array the same size as the dentry hash table on my system, and then
just increments the value at the address we got to see how many
entries we overlap with.
As you can see the old hash function ended up with all 1 million
entries in their own bucket, whereas the new one they are only
distributed among ~12.5k buckets, which is why we're using so much
more CPU in __d_lookup".
The reason for this hash regression is two-fold:
- On 64-bit architectures the down-mixing of the original 64-bit
word-at-a-time hash into the final 32-bit hash value is very
simplistic and suboptimal, and just adds the two 32-bit parts
together.
In particular, because there is no bit shuffling and the mixing
boundary is also a byte boundary, similar character patterns in the
low and high word easily end up just canceling each other out.
- the old byte-at-a-time hash mixed each byte into the final hash as it
hashed the path component name, resulting in the low bits of the hash
generally being a good source of hash data. That is not true for the
word-at-a-time case, and the hash data is distributed among all the
bits.
The fix is the same in both cases: do a better job of mixing the bits up
and using as much of the hash data as possible. We already have the
"hash_32|64()" functions to do that.
Reported-by: Josef Bacik <jbacik@fb.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chris Mason <clm@fb.com>
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Al Viro [Fri, 25 Oct 2013 20:41:01 +0000 (16:41 -0400)]
dcache.c: get rid of pointless macros
commit 482db9066199813d6b999b65a3171afdbec040b6 upstream.
D_HASH{MASK,BITS} are used once each, both in the same function (d_hash()).
At this point they are actively misguiding - they imply that values are
compiler constants, which is no longer true.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bart Van Assche [Wed, 9 Jul 2014 13:57:26 +0000 (15:57 +0200)]
IB/srp: Fix deadlock between host removal and multipathd
commit bcc05910359183b431da92713e98eed478edf83a upstream.
If scsi_remove_host() is invoked after a SCSI device has been blocked,
if the fast_io_fail_tmo or dev_loss_tmo work gets scheduled on the
workqueue executing srp_remove_work() and if an I/O request is
scheduled after the SCSI device had been blocked by e.g. multipathd
then the following deadlock can occur:
kworker/6:1 D ffff880831f3c460 0 195 2 0x00000000
Call Trace:
[<ffffffff814aafd9>] schedule+0x29/0x70
[<ffffffff814aa0ef>] schedule_timeout+0x10f/0x2a0
[<ffffffff8105af6f>] msleep+0x2f/0x40
[<ffffffff8123b0ae>] __blk_drain_queue+0x4e/0x180
[<ffffffff8123d2d5>] blk_cleanup_queue+0x225/0x230
[<ffffffffa0010732>] __scsi_remove_device+0x62/0xe0 [scsi_mod]
[<ffffffffa000ed2f>] scsi_forget_host+0x6f/0x80 [scsi_mod]
[<ffffffffa0002eba>] scsi_remove_host+0x7a/0x130 [scsi_mod]
[<ffffffffa07cf5c5>] srp_remove_work+0x95/0x180 [ib_srp]
[<ffffffff8106d7aa>] process_one_work+0x1ea/0x6c0
[<ffffffff8106dd9b>] worker_thread+0x11b/0x3a0
[<ffffffff810758bd>] kthread+0xed/0x110
[<ffffffff814b972c>] ret_from_fork+0x7c/0xb0
multipathd D ffff880096acc460 0 5340 1 0x00000000
Call Trace:
[<ffffffff814aafd9>] schedule+0x29/0x70
[<ffffffff814aa0ef>] schedule_timeout+0x10f/0x2a0
[<ffffffff814ab79b>] io_schedule_timeout+0x9b/0xf0
[<ffffffff814abe1c>] wait_for_completion_io_timeout+0xdc/0x110
[<ffffffff81244b9b>] blk_execute_rq+0x9b/0x100
[<ffffffff8124f665>] sg_io+0x1a5/0x450
[<ffffffff8124fd21>] scsi_cmd_ioctl+0x2a1/0x430
[<ffffffff8124fef2>] scsi_cmd_blk_ioctl+0x42/0x50
[<ffffffffa00ec97e>] sd_ioctl+0xbe/0x140 [sd_mod]
[<ffffffff8124bd04>] blkdev_ioctl+0x234/0x840
[<ffffffff811cb491>] block_ioctl+0x41/0x50
[<ffffffff811a0df0>] do_vfs_ioctl+0x300/0x520
[<ffffffff811a1051>] SyS_ioctl+0x41/0x80
[<ffffffff814b9962>] tracesys+0xd0/0xd5
Fix this by scheduling removal work on another workqueue than the
transport layer timers.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: David Dillow <dave@thedillows.org>
Cc: Sebastian Parschauer <sebastian.riemer@profitbricks.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tejun Heo [Sat, 5 Jul 2014 22:43:21 +0000 (18:43 -0400)]
blkcg: don't call into policy draining if root_blkg is already gone
commit 2a1b4cf2331d92bc009bf94fa02a24604cdaf24c upstream.
While a queue is being destroyed, all the blkgs are destroyed and its
->root_blkg pointer is set to NULL. If someone else starts to drain
while the queue is in this state, the following oops happens.
NULL pointer dereference at 0000000000000028
IP: [<ffffffff8144e944>] blk_throtl_drain+0x84/0x230
PGD e4a1067 PUD b773067 PMD 0
Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
Modules linked in: cfq_iosched(-) [last unloaded: cfq_iosched]
CPU: 1 PID: 537 Comm: bash Not tainted 3.16.0-rc3-work+ #2
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
task: ffff88000e222250 ti: ffff88000efd4000 task.ti: ffff88000efd4000
RIP: 0010:[<ffffffff8144e944>] [<ffffffff8144e944>] blk_throtl_drain+0x84/0x230
RSP: 0018:ffff88000efd7bf0 EFLAGS: 00010046
RAX: 0000000000000000 RBX: ffff880015091450 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffff88000efd7c10 R08: 0000000000000000 R09: 0000000000000001
R10: ffff88000e222250 R11: 0000000000000000 R12: ffff880015091450
R13: ffff880015092e00 R14: ffff880015091d70 R15: ffff88001508fc28
FS: 00007f1332650740(0000) GS:ffff88001fa80000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000028 CR3: 0000000009446000 CR4: 00000000000006e0
Stack:
ffffffff8144e8f6 ffff880015091450 0000000000000000 ffff880015091d80
ffff88000efd7c28 ffffffff8144ae2f ffff880015091450 ffff88000efd7c58
ffffffff81427641 ffff880015091450 ffffffff82401f00 ffff880015091450
Call Trace:
[<ffffffff8144ae2f>] blkcg_drain_queue+0x1f/0x60
[<ffffffff81427641>] __blk_drain_queue+0x71/0x180
[<ffffffff81429b3e>] blk_queue_bypass_start+0x6e/0xb0
[<ffffffff814498b8>] blkcg_deactivate_policy+0x38/0x120
[<ffffffff8144ec44>] blk_throtl_exit+0x34/0x50
[<ffffffff8144aea5>] blkcg_exit_queue+0x35/0x40
[<ffffffff8142d476>] blk_release_queue+0x26/0xd0
[<ffffffff81454968>] kobject_cleanup+0x38/0x70
[<ffffffff81454848>] kobject_put+0x28/0x60
[<ffffffff81427505>] blk_put_queue+0x15/0x20
[<ffffffff817d07bb>] scsi_device_dev_release_usercontext+0x16b/0x1c0
[<ffffffff810bc339>] execute_in_process_context+0x89/0xa0
[<ffffffff817d064c>] scsi_device_dev_release+0x1c/0x20
[<ffffffff817930e2>] device_release+0x32/0xa0
[<ffffffff81454968>] kobject_cleanup+0x38/0x70
[<ffffffff81454848>] kobject_put+0x28/0x60
[<ffffffff817934d7>] put_device+0x17/0x20
[<ffffffff817d11b9>] __scsi_remove_device+0xa9/0xe0
[<ffffffff817d121b>] scsi_remove_device+0x2b/0x40
[<ffffffff817d1257>] sdev_store_delete+0x27/0x30
[<ffffffff81792ca8>] dev_attr_store+0x18/0x30
[<ffffffff8126f75e>] sysfs_kf_write+0x3e/0x50
[<ffffffff8126ea87>] kernfs_fop_write+0xe7/0x170
[<ffffffff811f5e9f>] vfs_write+0xaf/0x1d0
[<ffffffff811f69bd>] SyS_write+0x4d/0xc0
[<ffffffff81d24692>] system_call_fastpath+0x16/0x1b
776687bce42b ("block, blk-mq: draining can't be skipped even if
bypass_depth was non-zero") made it easier to trigger this bug by
making blk_queue_bypass_start() drain even when it loses the first
bypass test to blk_cleanup_queue(); however, the bug has always been
there even before the commit as blk_queue_bypass_start() could race
against queue destruction, win the initial bypass test but perform the
actual draining after blk_cleanup_queue() already destroyed all blkgs.
Fix it by skipping the call into policy draining if all the blkgs are
already gone.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Shirish Pargaonkar <spargaonkar@suse.com>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Reported-by: Jet Chen <jet.chen@intel.com>
Tested-by: Shirish Pargaonkar <spargaonkar@suse.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Roger Quadros [Mon, 25 Aug 2014 23:15:33 +0000 (16:15 -0700)]
mtd: nand: omap: Fix 1-bit Hamming code scheme, omap_calculate_ecc()
commit 40ddbf5069bd4e11447c0088fc75318e0aac53f0 upstream.
commit 65b97cf6b8de, introduced in v3.7, caused a regression
by using a reversed CS_MASK, thus causing omap_calculate_ecc to
always fail. As the NAND base driver never checks for .calculate()'s
return value, the zeroed ECC values are used as is without showing
any error to the user. However, this won't work and the NAND device
won't be guarded by any error code.
Fix the issue by using the correct mask.
Code was tested on omap3beagle using the following procedure
- flash the primary bootloader (MLO) from the kernel to the first
NAND partition using nandwrite.
- boot the board from NAND. This utilizes OMAP ROM loader that
relies on 1-bit Hamming code ECC.
Fixes: 65b97cf6b8de (mtd: nand: omap2: handle nand on gpmc)
Signed-off-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kevin Hao [Thu, 3 Jul 2014 02:35:26 +0000 (10:35 +0800)]
mtd/ftl: fix the double free of the buffers allocated in build_maps()
commit a152056c912db82860a8b4c23d0bd3a5aa89e363 upstream.
I got the following panic on my fsl p5020ds board.
Unable to handle kernel paging request for data at address 0x7375627379737465
Faulting instruction address: 0xc000000000100778
Oops: Kernel access of bad area, sig: 11 [#1]
SMP NR_CPUS=24 CoreNet Generic
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.15.0-next-20140613 #145
task: c0000000fe080000 ti: c0000000fe088000 task.ti: c0000000fe088000
NIP: c000000000100778 LR: c00000000010073c CTR: 0000000000000000
REGS: c0000000fe08aa00 TRAP: 0300 Not tainted (3.15.0-next-20140613)
MSR: 0000000080029000 <CE,EE,ME> CR: 24ad2e24 XER: 00000000
DEAR: 7375627379737465 ESR: 0000000000000000 SOFTE: 1
GPR00: c0000000000c99b0 c0000000fe08ac80 c0000000009598e0 c0000000fe001d80
GPR04: 00000000000000d0 0000000000000913 c000000007902b20 0000000000000000
GPR08: c0000000feaae888 0000000000000000 0000000007091000 0000000000200200
GPR12: 0000000028ad2e28 c00000000fff4000 c0000000007abe08 0000000000000000
GPR16: c0000000007ab160 c0000000007aaf98 c00000000060ba68 c0000000007abda8
GPR20: c0000000007abde8 c0000000feaea6f8 c0000000feaea708 c0000000007abd10
GPR24: c000000000989370 c0000000008c6228 00000000000041ed c0000000fe00a400
GPR28: c00000000017c1cc 00000000000000d0 7375627379737465 c0000000fe001d80
NIP [c000000000100778] .__kmalloc_track_caller+0x70/0x168
LR [c00000000010073c] .__kmalloc_track_caller+0x34/0x168
Call Trace:
[c0000000fe08ac80] [c00000000087e6b8] uevent_sock_list+0x0/0x10 (unreliable)
[c0000000fe08ad20] [c0000000000c99b0] .kstrdup+0x44/0x90
[c0000000fe08adc0] [c00000000017c1cc] .__kernfs_new_node+0x4c/0x130
[c0000000fe08ae70] [c00000000017d7e4] .kernfs_new_node+0x2c/0x64
[c0000000fe08aef0] [c00000000017db00] .kernfs_create_dir_ns+0x34/0xc8
[c0000000fe08af80] [c00000000018067c] .sysfs_create_dir_ns+0x58/0xcc
[c0000000fe08b010] [c0000000002c711c] .kobject_add_internal+0xc8/0x384
[c0000000fe08b0b0] [c0000000002c7644] .kobject_add+0x64/0xc8
[c0000000fe08b140] [c000000000355ebc] .device_add+0x11c/0x654
[c0000000fe08b200] [c0000000002b5988] .add_disk+0x20c/0x4b4
[c0000000fe08b2c0] [c0000000003a21d4] .add_mtd_blktrans_dev+0x340/0x514
[c0000000fe08b350] [c0000000003a3410] .mtdblock_add_mtd+0x74/0xb4
[c0000000fe08b3e0] [c0000000003a32cc] .blktrans_notify_add+0x64/0x94
[c0000000fe08b470] [c00000000039b5b4] .add_mtd_device+0x1d4/0x368
[c0000000fe08b520] [c00000000039b830] .mtd_device_parse_register+0xe8/0x104
[c0000000fe08b5c0] [c0000000003b8408] .of_flash_probe+0x72c/0x734
[c0000000fe08b750] [c00000000035ba40] .platform_drv_probe+0x38/0x84
[c0000000fe08b7d0] [c0000000003599a4] .really_probe+0xa4/0x29c
[c0000000fe08b870] [c000000000359d3c] .__driver_attach+0x100/0x104
[c0000000fe08b900] [c00000000035746c] .bus_for_each_dev+0x84/0xe4
[c0000000fe08b9a0] [c0000000003593c0] .driver_attach+0x24/0x38
[c0000000fe08ba10] [c000000000358f24] .bus_add_driver+0x1c8/0x2ac
[c0000000fe08bab0] [c00000000035a3a4] .driver_register+0x8c/0x158
[c0000000fe08bb30] [c00000000035b9f4] .__platform_driver_register+0x6c/0x80
[c0000000fe08bba0] [c00000000084e080] .of_flash_driver_init+0x1c/0x30
[c0000000fe08bc10] [c000000000001864] .do_one_initcall+0xbc/0x238
[c0000000fe08bd00] [c00000000082cdc0] .kernel_init_freeable+0x188/0x268
[c0000000fe08bdb0] [c0000000000020a0] .kernel_init+0x1c/0xf7c
[c0000000fe08be30] [c000000000000884] .ret_from_kernel_thread+0x58/0xd4
Instruction dump:
41bd0010 480000c8 4bf04eb5 60000000 e94d0028 e93f0000 7cc95214 e8a60008
7fc9502a 2fbe0000 419e00c8 e93f0022 <7f7e482a> 39200000 88ed06b2 992d06b2
---[ end trace b4c9a94804a42d40 ]---
It seems that the corrupted partition header on my mtd device triggers
a bug in the ftl. In function build_maps() it will allocate the buffers
needed by the mtd partition, but if something goes wrong such as kmalloc
failure, mtd read error or invalid partition header parameter, it will
free all allocated buffers and then return non-zero. In my case, it
seems that partition header parameter 'NumTransferUnits' is invalid.
And ftl_freepart() is a function which frees all the partition
buffers allocated by build_maps(). Since build_maps() is a self-cleaning
function, there is no need to invoke ftl_freepart() even
if build_maps() returns an error. Otherwise it causes the
buffers to be freed twice, and then weird things happen.
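The caller-side shape of the fix is roughly as follows (labels and messages approximated):
    ret = build_maps(partition);
    if (ret != 0) {
            pr_notice("ftl_cs: build_maps failed!\n");
            /* build_maps() already freed everything it allocated, so do NOT
             * fall through to ftl_freepart() here; just bail out. */
            goto out;
    }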
Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pavel Shilovsky [Tue, 26 Aug 2014 15:04:44 +0000 (19:04 +0400)]
CIFS: Fix wrong restart readdir for SMB1
commit f736906a7669a77cf8cabdcbcf1dc8cb694e12ef upstream.
The existing code calls server->ops->close(), which is not
right. This causes XFS test generic/310 to fail. Fix this
by using server->ops->closedir() function.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pavel Shilovsky [Fri, 22 Aug 2014 09:32:11 +0000 (13:32 +0400)]
CIFS: Fix wrong filename length for SMB2
commit 1bbe4997b13de903c421c1cc78440e544b5f9064 upstream.
The existing code uses the old MAX_NAME constant. This causes
XFS test generic/013 to fail. Fix it by replacing MAX_NAME with
PATH_MAX that SMB1 uses. Also remove an unused MAX_NAME constant
definition.
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pavel Shilovsky [Mon, 18 Aug 2014 16:49:58 +0000 (20:49 +0400)]
CIFS: Fix wrong directory attributes after rename
commit b46799a8f28c43c5264ac8d8ffa28b311b557e03 upstream.
When we request a rename we also need to update the attributes
of both the source and target parent directories. Not doing so
causes the generic/309 xfstest to fail on SMB2 mounts. Fix this
by marking these directories for forced revalidation.
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steve French [Sun, 17 Aug 2014 05:22:24 +0000 (00:22 -0500)]
CIFS: Possible null ptr deref in SMB2_tcon
commit 18f39e7be0121317550d03e267e3ebd4dbfbb3ce upstream.
As Raphael Geissert pointed out, tcon_error_exit can dereference tcon
and there is one path in which tcon can be null.
Signed-off-by: Steve French <smfrench@gmail.com>
Reported-by: Raphael Geissert <geissert@debian.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pavel Shilovsky [Fri, 27 Jun 2014 06:33:11 +0000 (10:33 +0400)]
CIFS: Fix async reading on reconnects
commit 038bc961c31b070269ecd07349a7ee2e839d4fec upstream.
If we get into read_into_pages() from cifs_readv_receive() and then
lose the network, we issue cifs_reconnect, which moves all mids to
a private list and issues their callbacks. The callback of the async
read request sets a mid to retry, frees it and wakes up a process
that waits on the rdata completion.
After the connection is established we return from read_into_pages()
with a short read, use the mid that was freed before and try to read
the remaining data from a newly created socket. Both actions are
not what we want to do. In reconnect cases (-EAGAIN) we should not
mask off the error with a short read but should return the error
code instead.
Acked-by: Jeff Layton <jlayton@samba.org>
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pavel Shilovsky [Fri, 18 Jul 2014 14:25:52 +0000 (18:25 +0400)]
CIFS: Fix STATUS_CANNOT_DELETE error mapping for SMB2
commit 21496687a79424572f46a84c690d331055f4866f upstream.
The existing mapping causes the unlink() call to return an error after the delete
operation. Changing the mapping to -EACCES makes the client process
the call like CIFS protocol does - reset dos attributes with ATTR_READONLY
flag masked off and retry the operation.
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ilya Dryomov [Tue, 9 Sep 2014 15:39:15 +0000 (19:39 +0400)]
libceph: do not hard code max auth ticket len
commit c27a3e4d667fdcad3db7b104f75659478e0c68d8 upstream.
We hard code cephx auth ticket buffer size to 256 bytes. This isn't
enough for any moderate setups and, in case tickets themselves are not
encrypted, leads to buffer overflows (ceph_x_decrypt() errors out, but
ceph_decode_copy() doesn't - it's just a memcpy() wrapper). Since the
buffer is allocated dynamically anyway, allocate it a bit later, at
the point where we know how much is going to be needed.
Fixes: http://tracker.ceph.com/issues/8979
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ilya Dryomov [Mon, 8 Sep 2014 13:25:34 +0000 (17:25 +0400)]
libceph: add process_one_ticket() helper
commit 597cda357716a3cf8d994cb11927af917c8d71fa upstream.
Add a helper for processing individual cephx auth tickets. Needed for
the next commit, which deals with allocating ticket buffers. (Most of
the diff here is whitespace - view with git diff -b).
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ilya Dryomov [Fri, 8 Aug 2014 08:43:39 +0000 (12:43 +0400)]
libceph: set last_piece in ceph_msg_data_pages_cursor_init() correctly
commit 5f740d7e1531099b888410e6bab13f68da9b1a4d upstream.
Determining ->last_piece based on the value of ->page_offset + length
is incorrect because length here is the length of the entire message.
->last_piece ends up set to false even if the page array data item length is <=
PAGE_SIZE, which results in invalid length passed to
ceph_tcp_{send,recv}page() and causes various asserts to fire.
# cat pages-cursor-init.sh
#!/bin/bash
rbd create --size 10 --image-format 2 foo
FOO_DEV=$(rbd map foo)
dd if=/dev/urandom of=$FOO_DEV bs=1M &>/dev/null
rbd snap create foo@snap
rbd snap protect foo@snap
rbd clone foo@snap bar
# rbd_resize calls librbd rbd_resize(), size is in bytes
./rbd_resize bar $(((4 << 20) + 512))
rbd resize --size 10 bar
BAR_DEV=$(rbd map bar)
# trigger a 512-byte copyup -- 512-byte page array data item
dd if=/dev/urandom of=$BAR_DEV bs=1M count=1 seek=5
The problem exists only in ceph_msg_data_pages_cursor_init(),
ceph_msg_data_pages_advance() does the right thing. The size_t cast is
unnecessary.
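The corrected calculation is roughly as below (field names follow the cursor structure described above, approximated):
    /* resid is how much of *this data item* remains, clamped to the data
     * item length rather than the whole message length. */
    cursor->resid = min(length, data->length);

    /* last_piece is derived from the data item remainder, not from the
     * total message length, so short page-array items (<= PAGE_SIZE) are
     * correctly flagged as a single, final piece. */
    cursor->last_piece = cursor->page_offset + cursor->resid <= PAGE_SIZE;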
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Thu, 31 Jul 2014 00:16:29 +0000 (10:16 +1000)]
md/raid1,raid10: always abort recover on write error.
commit 2446dba03f9dabe0b477a126cbeb377854785b47 upstream.
Currently we don't abort recovery on a write error if the write error
to the recovering device was triggered by normal IO (as opposed to
recovery IO).
This means that for one bitmap region, the recovery might write to the
recovering device for a few sectors, then not bother for subsequent
sectors (as it never writes to failed devices). In this case
the bitmap bit will be cleared, but it really shouldn't.
The result is that if the recovering device fails and is then re-added
(after fixing whatever hardware problem triggered the failure),
the second recovery won't redo the region it was in the middle of,
so some of the device will not be recovered properly.
If we abort the recovery, the region being processed will be cancelled
(bit not cleared) and the whole region will be retried.
As the bug can result in data corruption the patch is suitable for
-stable. For kernels prior to 3.11 there is a conflict in raid10.c
which will require care.
Original-from: jiao hui <jiaohui@bwstor.com.cn>
Reported-and-tested-by: jiao hui <jiaohui@bwstor.com.cn>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chris Mason [Tue, 2 Sep 2014 02:12:52 +0000 (12:12 +1000)]
xfs: don't zero partial page cache pages during O_DIRECT writes
commit 85e584da3212140ee80fd047f9058bbee0bc00d5 upstream.
xfs is using truncate_pagecache_range to invalidate the page cache
during DIO reads. This is different from the other filesystems who
only invalidate pages during DIO writes.
truncate_pagecache_range is meant to be used when we are freeing the
underlying data structs from disk, so it will zero any partial
ranges in the page. This means a DIO read can zero out part of the
page cache page, and it is possible the page will stay in cache.
Buffered reads will then find an up-to-date page with zeros instead of
the data actually on disk.
This patch fixes things by using invalidate_inode_pages2_range
instead. It preserves the page cache invalidation, but won't zero
any pages.
[dchinner: catch error and warn if it fails. Comment.]
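The substitution is essentially the following (simplified; error handling per the bracketed note above, and the surrounding XFS context is approximated):
    /* Drop clean cached pages over the DIO range without zeroing any
     * partial pages, unlike truncate_pagecache_range(). */
    ret = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping,
                                        pos >> PAGE_CACHE_SHIFT, -1);
    /* Invalidation of clean pages should never fail for a read, so warn
     * if it ever does. */
    WARN_ON_ONCE(ret);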
Signed-off-by: Chris Mason <clm@fb.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dave Chinner [Tue, 2 Sep 2014 02:12:52 +0000 (12:12 +1000)]
xfs: don't zero partial page cache pages during O_DIRECT writes
commit 834ffca6f7e345a79f6f2e2d131b0dfba8a4b67a upstream.
Similar to direct IO reads, direct IO writes are using
truncate_pagecache_range to invalidate the page cache. This is
incorrect due to the sub-block zeroing in the page cache that
truncate_pagecache_range() triggers.
This patch fixes things by using invalidate_inode_pages2_range
instead. It preserves the page cache invalidation, but won't zero
any pages.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dave Chinner [Tue, 2 Sep 2014 02:12:51 +0000 (12:12 +1000)]
xfs: don't dirty buffers beyond EOF
commit 22e757a49cf010703fcb9c9b4ef793248c39b0c2 upstream.
generic/263 is failing fsx at this point with a page spanning
EOF that cannot be invalidated. The operations are:
1190 mapwrite 0x52c00 thru 0x5e569 (0xb96a bytes)
1191 mapread 0x5c000 thru 0x5d636 (0x1637 bytes)
1192 write 0x5b600 thru 0x771ff (0x1bc00 bytes)
where 1190 extends EOF from 0x54000 to 0x5e569. When the direct IO
write attempts to invalidate the cached page over this range, it
fails with -EBUSY and so any attempt to do page invalidation fails.
The real question is this: Why can't that page be invalidated after
it has been written to disk and cleaned?
Well, there's data on the first two buffers in the page (1k block
size, 4k page), but the third buffer on the page (i.e. beyond EOF)
is failing drop_buffers because its bh->b_state == 0x3, which is
BH_Uptodate | BH_Dirty. IOWs, there are dirty buffers beyond EOF. Say
what?
OK, set_buffer_dirty() is called on all buffers from
__set_page_buffers_dirty(), regardless of whether the buffer is
beyond EOF or not, which means that when we get to ->writepage,
we have buffers marked dirty beyond EOF that we need to clean.
So, we need to implement our own .set_page_dirty method that
doesn't dirty buffers beyond EOF.
This is messy because the buffer code is not meant to be shared
and it has interesting locking issues on the buffer dirty bits.
So just copy and paste it and then modify it to suit what we need.
Note: the solutions the other filesystems and generic block code use
of marking the buffers clean in ->writepage does not work for XFS.
It still leaves dirty buffers beyond EOF and invalidations still
fail. Hence rather than play whack-a-mole, this patch simply
prevents those buffers from being dirtied in the first place.
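The core of the XFS-private .set_page_dirty ends up looking roughly like this (simplified from the description; end_offset stands for min(EOF, end of page) and variable names are approximated):
    /* Only dirty buffers that lie within EOF; anything beyond it is left
     * clean so it can still be invalidated later. */
    if (page_has_buffers(page)) {
            struct buffer_head *head = page_buffers(page);
            struct buffer_head *bh = head;

            do {
                    if (offset < end_offset)
                            set_buffer_dirty(bh);
                    bh = bh->b_this_page;
                    offset += 1 << inode->i_blkbits;
            } while (bh != head);
    }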
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dave Chinner [Mon, 4 Aug 2014 02:43:26 +0000 (12:43 +1000)]
xfs: quotacheck leaves dquot buffers without verifiers
commit 5fd364fee81a7888af806e42ed8a91c845894f2d upstream.
When running xfs/305, I noticed that quotacheck was flushing dquot
buffers that did not have the xfs_dquot_buf_ops verifiers attached:
XFS (vdb): _xfs_buf_ioapply: no ops on block 0x1dc8/0x1dc8
ffff880052489000: 44 51 01 04 00 00 65 b8 00 00 00 00 00 00 00 00 DQ....e.........
ffff880052489010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
ffff880052489020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
ffff880052489030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
CPU: 1 PID: 2376 Comm: mount Not tainted 3.16.0-rc2-dgc+ #306
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
ffff88006fe38000 ffff88004a0ffae8 ffffffff81cf1cca 0000000000000001
ffff88004a0ffb88 ffffffff814d50ca 000010004a0ffc70 0000000000000000
ffff88006be56dc4 0000000000000021 0000000000001dc8 ffff88007c773d80
Call Trace:
[<ffffffff81cf1cca>] dump_stack+0x45/0x56
[<ffffffff814d50ca>] _xfs_buf_ioapply+0x3ca/0x3d0
[<ffffffff810db520>] ? wake_up_state+0x20/0x20
[<ffffffff814d51f5>] ? xfs_bdstrat_cb+0x55/0xb0
[<ffffffff814d513b>] xfs_buf_iorequest+0x6b/0xd0
[<ffffffff814d51f5>] xfs_bdstrat_cb+0x55/0xb0
[<ffffffff814d53ab>] __xfs_buf_delwri_submit+0x15b/0x220
[<ffffffff814d6040>] ? xfs_buf_delwri_submit+0x30/0x90
[<ffffffff814d6040>] xfs_buf_delwri_submit+0x30/0x90
[<ffffffff8150f89d>] xfs_qm_quotacheck+0x17d/0x3c0
[<ffffffff81510591>] xfs_qm_mount_quotas+0x151/0x1e0
[<ffffffff814ed01c>] xfs_mountfs+0x56c/0x7d0
[<ffffffff814f0f12>] xfs_fs_fill_super+0x2c2/0x340
[<ffffffff811c9fe4>] mount_bdev+0x194/0x1d0
[<ffffffff814f0c50>] ? xfs_finish_flags+0x170/0x170
[<ffffffff814ef0f5>] xfs_fs_mount+0x15/0x20
[<ffffffff811ca8c9>] mount_fs+0x39/0x1b0
[<ffffffff811e4d67>] vfs_kern_mount+0x67/0x120
[<ffffffff811e757e>] do_mount+0x23e/0xad0
[<ffffffff8117abde>] ? __get_free_pages+0xe/0x50
[<ffffffff811e71e6>] ? copy_mount_options+0x36/0x150
[<ffffffff811e8103>] SyS_mount+0x83/0xc0
[<ffffffff81cfd40b>] tracesys+0xdd/0xe2
This was caused by dquot buffer readahead not attaching a verifier
structure to the buffer when readahead was issued, resulting in the
followup read of the buffer finding a valid buffer and so not
attaching new verifiers to the buffer as part of the read.
Also, when a verifier failure occurs, we then read the buffer
without verifiers. Attach the verifiers manually after this read so
that if the buffer is then written it will be verified that the
corruption has been repaired.
Further, when flushing a dquot we don't ask for a verifier when
reading in the dquot buffer the dquot belongs to. Most of the time
this isn't an issue because the buffer is still cached, but when it
is not cached it will result in writing the dquot buffer without
having the verifier attached.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steve Wise [Fri, 25 Jul 2014 14:11:33 +0000 (09:11 -0500)]
RDMA/iwcm: Use a default listen backlog if needed
commit 2f0304d21867476394cd51a54e97f7273d112261 upstream.
If the user creates a listening cm_id with backlog of 0 the IWCM ends
up not allowing any connection requests at all. The correct behavior
is for the IWCM to pick a default value if the user backlog parameter
is zero.
Lustre from version 1.8.8 onward uses a backlog of 0, which breaks
iwarp support without this fix.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Mon, 18 Aug 2014 03:59:50 +0000 (13:59 +1000)]
md/raid10: Fix memory leak when raid10 reshape completes.
commit b39685526f46976bcd13aa08c82480092befa46c upstream.
When a raid10 commences a resync/recovery/reshape it allocates
some buffer space.
When a resync/recovery completes the buffer space is freed. But not
when the reshape completes.
This can result in a small memory leak.
There is a subtle side-effect of this bug. When a RAID10 is reshaped
to a larger array (more devices), the reshape is immediately followed
by a "resync" of the new space. This "resync" will use the buffer
space which was allocated for "reshape". This can cause problems
including a "BUG" in the SCSI layer. So this is suitable for -stable.
Fixes: 3ea7daa5d7fde47cd41f4d56c2deb949114da9d6
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Mon, 18 Aug 2014 03:56:38 +0000 (13:56 +1000)]
md/raid10: fix memory leak when reshaping a RAID10.
commit ce0b0a46955d1bb389684a2605dbcaa990ba0154 upstream.
raid10 reshape clears unwanted bits from a bio->bi_flags using
a method which, while clumsy, worked until 3.10 when BIO_OWNS_VEC
was added.
Since then it clears that bit but shouldn't. This results in a
memory leak.
So change to use the approved method of clearing unwanted bits.
As this causes a memory leak which can consume all of memory
the fix is suitable for -stable.
Fixes: a38352e0ac02dbbd4fa464dc22d1352b5fbd06fd
Reported-by: mdraid.pkoch@dfgh.net (Peter Koch)
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Tue, 12 Aug 2014 23:57:07 +0000 (09:57 +1000)]
md/raid6: avoid data corruption during recovery of double-degraded RAID6
commit 9c4bdf697c39805078392d5ddbbba5ae5680e0dd upstream.
During recovery of a double-degraded RAID6 it is possible for
some blocks not to be recovered properly, leading to corruption.
If a write happens to one block in a stripe that would be written to a
missing device, and at the same time that stripe is recovering data
to the other missing device, then that recovered data may not be written.
This patch skips, in the double-degraded case, an optimisation that is
only safe for single-degraded arrays.
Bug was introduced in 2.6.32 and fix is suitable for any kernel since
then. In an older kernel with separate handle_stripe5() and
handle_stripe6() functions the patch must change handle_stripe6().
Fixes: 6c0069c0ae9659e3a91b68eaed06a5c6c37f45c8
Cc: Yuri Tikhonov <yur@emcraft.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Reported-by: "Manibalan P" <pmanibalan@amiindia.co.in>
Tested-by: "Manibalan P" <pmanibalan@amiindia.co.in>
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1090423
Signed-off-by: NeilBrown <neilb@suse.de>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vignesh Raman [Tue, 22 Jul 2014 13:54:25 +0000 (19:24 +0530)]
Bluetooth: Avoid use of session socket after the session gets freed
commit 32333edb82fb2009980eefc5518100068147ab82 upstream.
The commits 08c30aca9e698faddebd34f81e1196295f9dc063 "Bluetooth: Remove
RFCOMM session refcnt" and 8ff52f7d04d9cc31f1e81dcf9a2ba6335ed34905
"Bluetooth: Return RFCOMM session ptrs to avoid freed session"
allow rfcomm_recv_ua and rfcomm_session_close to delete the session
(and free the corresponding socket) and propagate NULL session pointer
to the upper callers.
An additional fix is required to terminate the loop in the rfcomm_process_rx
function to avoid use of freed 'sk' memory.
The issue is only reproducible with kernel option CONFIG_PAGE_POISONING
enabled making freed memory being changed and filled up with fixed char
value used to unmask use-after-free issues.
Signed-off-by: Vignesh Raman <Vignesh_Raman@mentor.com>
Signed-off-by: Vitaly Kuzmichev <Vitaly_Kuzmichev@mentor.com>
Acked-by: Dean Jenkins <Dean_Jenkins@mentor.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vladimir Davydov [Tue, 15 Jul 2014 08:25:28 +0000 (12:25 +0400)]
Bluetooth: never linger on process exit
commit
093facf3634da1b0c2cc7ed106f1983da901bbab upstream.
If the current process is exiting, lingering on socket close will make
it unkillable, so we should avoid it.
Reproducer:
#include <sys/types.h>
#include <sys/socket.h>
#define BTPROTO_L2CAP 0
#define BTPROTO_SCO 2
#define BTPROTO_RFCOMM 3
int main()
{
	int fd;
	struct linger ling;

	/* Any Bluetooth socket type reproduces the problem: */
	fd = socket(PF_BLUETOOTH, SOCK_STREAM, BTPROTO_RFCOMM);
	//or: fd = socket(PF_BLUETOOTH, SOCK_DGRAM, BTPROTO_L2CAP);
	//or: fd = socket(PF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_SCO);

	/* Request an absurdly long linger on close ... */
	ling.l_onoff = 1;
	ling.l_linger = 1000000000;
	setsockopt(fd, SOL_SOCKET, SO_LINGER, &ling, sizeof(ling));

	/* ... then exit; the close during exit now blocks unkillably. */
	return 0;
}
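A hedged sketch of the fix idea (not the exact hunks): the linger wait is
simply skipped when the closing task is already exiting.
	if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime &&
	    !(current->flags & PF_EXITING))
		bt_sock_wait_state(sk, BT_CLOSED, sk->sk_lingertime);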
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Tue, 29 Jul 2014 22:50:44 +0000 (15:50 -0700)]
mnt: Add tests for unprivileged remount cases that have been found to be faulty
commit
db181ce011e3c033328608299cd6fac06ea50130 upstream.
Kenton Varda <kenton@sandstorm.io> discovered that by remounting a
read-only bind mount read-only in a user namespace the
MNT_LOCK_READONLY bit would be cleared, allowing an unprivileged user
to remount a read-only mount read-write.
Upon review of the remount code it was discovered that it allowed
nosuid, noexec, and nodev to be cleared. It was also discovered that
the code was allowing the per-mount atime flags to be changed.
The first naive patch to fix these issues had the flaw that remounting
a filesystem with the default atime settings could be disallowed.
To avoid such problems in the future, add tests to ensure unprivileged
remounts succeed and fail at the appropriate times.
Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Tue, 29 Jul 2014 00:36:04 +0000 (17:36 -0700)]
mnt: Change the default remount atime from relatime to the existing value
commit
ffbc6f0ead47fa5a1dc9642b0331cb75c20a640e upstream.
Since March 2009 the kernel has behaved such that, if no MS_..ATIME
flags are passed, it defaults to relatime.
Defaulting to relatime instead of the existing atime state during a
remount is silly, and causes problems in practice for people who don't
specify any MS_...ATIME flags and expect to get the default filesystem
atime setting. Those users may encounter a permission error because the
default atime setting does not work.
A default that does not work and causes permission problems is
ridiculous, so preserve the existing value to have a default
atime setting that is always guaranteed to work.
Using the default atime setting in this way is particularly
interesting for applications built to run in restricted userspace
environments without /proc mounted, as the existing atime mount
options of a filesystem cannot be read from /proc/mounts.
In practice this fixes user space that uses the default atime
setting on remount and that was broken by the permission checks
keeping less privileged users from changing more privileged users'
atime settings.
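For illustration, a minimal userspace sketch (the mount point is hypothetical):
a remount that names no MS_*ATIME flag now keeps the mount's existing atime
mode instead of flipping it to relatime.
	#include <sys/mount.h>

	int main(void)
	{
		/* No MS_*ATIME flag: the existing atime setting is preserved. */
		return mount(NULL, "/mnt", NULL,
			     MS_REMOUNT | MS_BIND | MS_RDONLY, NULL);
	}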
Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Tue, 29 Jul 2014 00:26:07 +0000 (17:26 -0700)]
mnt: Correct permission checks in do_remount
commit
9566d6742852c527bf5af38af5cbb878dad75705 upstream.
While investigating the issue where "mount --bind -oremount,ro ..."
would result in a later "mount --bind -oremount,rw" succeeding even if
the mount started off locked, I realized that there are several
additional mount flags that should be locked and are not.
In particular MNT_NOSUID, MNT_NODEV, MNT_NOEXEC, and the atime
flags in addition to MNT_READONLY should all be locked. These
flags are all per superblock, can all be changed with MS_BIND,
and should not be changeable if set by a more privileged user.
The following additions to the current logic are added in this patch.
- nosuid may not be clearable by a less privileged user.
- nodev may not be clearable by a less privileged user.
- noexec may not be clearable by a less privileged user.
- atime flags may not be changeable by a less privileged user.
The logic with atime is that always setting atime on access is a
global policy and backup software and auditing software could break if
atime bits are not updated (when they are configured to be updated),
and serious performance degradation could result (DOS attack) if atime
updates happen when they have been explicitly disabled. Therefore an
unprivileged user should not be able to mess with the atime bits set
by a more privileged user.
The additional restrictions are implemented with the addition of
MNT_LOCK_NOSUID, MNT_LOCK_NODEV, MNT_LOCK_NOEXEC, and MNT_LOCK_ATIME
mnt flags.
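A hedged sketch of the added checks in do_remount() (simplified): a remount
may not clear a flag that was locked when the mount was propagated into a
less privileged mount namespace.
	if ((mnt->mnt.mnt_flags & MNT_LOCK_NOSUID) && !(mnt_flags & MNT_NOSUID))
		return -EPERM;
	if ((mnt->mnt.mnt_flags & MNT_LOCK_NODEV) && !(mnt_flags & MNT_NODEV))
		return -EPERM;
	if ((mnt->mnt.mnt_flags & MNT_LOCK_NOEXEC) && !(mnt_flags & MNT_NOEXEC))
		return -EPERM;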
Taken together these changes and the fixes for MNT_LOCK_READONLY
should make it safe for an unprivileged user to create a user
namespace and to call "mount --bind -o remount,... ..." without
the danger of mount flags being changed maliciously.
Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Tue, 29 Jul 2014 00:10:56 +0000 (17:10 -0700)]
mnt: Move the test for MNT_LOCK_READONLY from change_mount_flags into do_remount
commit
07b645589dcda8b7a5249e096fece2a67556f0f4 upstream.
There are no races as locked mount flags are guaranteed to never change.
Moving the test into do_remount makes it more visible, and ensures all
filesystem remounts pass the MNT_LOCK_READONLY permission check. This
second case is not an issue today as filesystem remounts are guarded
by capable(CAP_DAC_ADMIN) and thus will always fail in less privileged
mount namespaces, but it could become an issue in the future.
Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Mon, 28 Jul 2014 23:26:53 +0000 (16:26 -0700)]
mnt: Only change user settable mount flags in remount
commit
a6138db815df5ee542d848318e5dae681590fccd upstream.
Kenton Varda <kenton@sandstorm.io> discovered that by remounting a
read-only bind mount read-only in a user namespace the
MNT_LOCK_READONLY bit would be cleared, allowing an unprivileged user
to remount a read-only mount read-write.
Correct this by replacing the mask of mount flags to preserve
with a mask of mount flags that may be changed, and preserve
all others. This ensures that any future bugs with this mask and
remount will fail in an easy to detect way where new mount flags
simply won't change.
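A hedged sketch of the idea (simplified): list the flags a user may change and
preserve everything else from the existing mount.
	#define MNT_USER_SETTABLE_MASK	(MNT_NOSUID | MNT_NODEV | MNT_NOEXEC | \
					 MNT_NOATIME | MNT_NODIRATIME | \
					 MNT_RELATIME | MNT_READONLY)

	mnt_flags &= MNT_USER_SETTABLE_MASK;
	mnt_flags |= mnt->mnt.mnt_flags & ~MNT_USER_SETTABLE_MASK;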
Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steven Rostedt (Red Hat) [Wed, 6 Aug 2014 19:36:31 +0000 (15:36 -0400)]
ring-buffer: Up rb_iter_peek() loop count to 3
commit
021de3d904b88b1771a3a2cfc5b75023c391e646 upstream.
After writing a test to try to trigger the bug that caused the
ring buffer iterator to become corrupted, I hit another bug:
WARNING: CPU: 1 PID: 5281 at kernel/trace/ring_buffer.c:3766 rb_iter_peek+0x113/0x238()
Modules linked in: ipt_MASQUERADE sunrpc [...]
CPU: 1 PID: 5281 Comm: grep Tainted: G W 3.16.0-rc3-test+ #143
Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
0000000000000000 ffffffff81809a80 ffffffff81503fb0 0000000000000000
ffffffff81040ca1 ffff8800796d6010 ffffffff810c138d ffff8800796d6010
ffff880077438c80 ffff8800796d6010 ffff88007abbe600 0000000000000003
Call Trace:
[<ffffffff81503fb0>] ? dump_stack+0x4a/0x75
[<ffffffff81040ca1>] ? warn_slowpath_common+0x7e/0x97
[<ffffffff810c138d>] ? rb_iter_peek+0x113/0x238
[<ffffffff810c138d>] ? rb_iter_peek+0x113/0x238
[<ffffffff810c14df>] ? ring_buffer_iter_peek+0x2d/0x5c
[<ffffffff810c6f73>] ? tracing_iter_reset+0x6e/0x96
[<ffffffff810c74a3>] ? s_start+0xd7/0x17b
[<ffffffff8112b13e>] ? kmem_cache_alloc_trace+0xda/0xea
[<ffffffff8114cf94>] ? seq_read+0x148/0x361
[<ffffffff81132d98>] ? vfs_read+0x93/0xf1
[<ffffffff81132f1b>] ? SyS_read+0x60/0x8e
[<ffffffff8150bf9f>] ? tracesys+0xdd/0xe2
Debugging this bug, which triggers when the rb_iter_peek() loops too
many times (more than 2 times), I discovered there's a case that can
cause that function to legitimately loop 3 times!
rb_iter_peek() is different than rb_buffer_peek() as the rb_buffer_peek()
only deals with the reader page (it's for consuming reads). The
rb_iter_peek() is for traversing the buffer without consuming it, and as
such, it can loop for one more reason. That is, if we hit the end of
the reader page or any page, it will go to the next page and try again.
That is, we have this:
1. iter->head > iter->head_page->page->commit
(rb_inc_iter() which moves the iter to the next page)
try again
2. event = rb_iter_head_event()
event->type_len == RINGBUF_TYPE_TIME_EXTEND
rb_advance_iter()
try again
3. read the event.
But we never get to 3, because the count is greater than 2 and we
cause the WARNING and return NULL.
Up the counter to 3.
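A hedged sketch of the check in rb_iter_peek() after the change (simplified):
	/* Three iterations are now legitimate; only warn beyond that. */
	if (RB_WARN_ON(cpu_buffer, ++nr_loops > 3))
		return NULL;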
Fixes: 69d1b839f7ee "ring-buffer: Bind time extend and data events together"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steven Rostedt (Red Hat) [Wed, 6 Aug 2014 18:11:33 +0000 (14:11 -0400)]
ring-buffer: Always reset iterator to reader page
commit
651e22f2701b4113989237c3048d17337dd2185c upstream.
When performing a consuming read, the ring buffer swaps out a
page from the ring buffer with an empty page and this page that
was swapped out becomes the new reader page. The reader page
is owned by the reader and since it was swapped out of the ring
buffer, writers do not have access to it (there's an exception
to that rule, but it's out of scope for this commit).
When reading the "trace" file, it is a non-consuming read, which
means that the data in the ring buffer will not be modified.
When the trace file is opened, a ring buffer iterator is allocated
and writes to the ring buffer are disabled, such that the iterator
will not have issues iterating over the data.
Although writes to the ring buffer are disabled, other reads, including
consuming reads, are not. If a consuming read happens, then
the iterator is reset and starts reading from the beginning again.
My tests would sometimes trigger this bug on my i386 box:
WARNING: CPU: 0 PID: 5175 at kernel/trace/trace.c:1527 __trace_find_cmdline+0x66/0xaa()
Modules linked in:
CPU: 0 PID: 5175 Comm: grep Not tainted 3.16.0-rc3-test+ #8
Hardware name: /DG965MQ, BIOS MQ96510J.86A.0372.2006.0605.1717 06/05/2006
00000000 00000000 f09c9e1c c18796b3 c1b5d74c f09c9e4c c103a0e3 c1b5154b
f09c9e78 00001437 c1b5d74c 000005f7 c10bd85a c10bd85a c1cac57c f09c9eb0
ed0e0000 f09c9e64 c103a185 00000009 f09c9e5c c1b5154b f09c9e78 f09c9e80
Call Trace:
[<c18796b3>] dump_stack+0x4b/0x75
[<c103a0e3>] warn_slowpath_common+0x7e/0x95
[<c10bd85a>] ? __trace_find_cmdline+0x66/0xaa
[<c10bd85a>] ? __trace_find_cmdline+0x66/0xaa
[<c103a185>] warn_slowpath_fmt+0x33/0x35
[<c10bd85a>] __trace_find_cmdline+0x66/0xaa
[<c10bed04>] trace_find_cmdline+0x40/0x64
[<c10c3c16>] trace_print_context+0x27/0xec
[<c10c4360>] ? trace_seq_printf+0x37/0x5b
[<c10c0b15>] print_trace_line+0x319/0x39b
[<c10ba3fb>] ? ring_buffer_read+0x47/0x50
[<c10c13b1>] s_show+0x192/0x1ab
[<c10bfd9a>] ? s_next+0x5a/0x7c
[<c112e76e>] seq_read+0x267/0x34c
[<c1115a25>] vfs_read+0x8c/0xef
[<c112e507>] ? seq_lseek+0x154/0x154
[<c1115ba2>] SyS_read+0x54/0x7f
[<c188488e>] syscall_call+0x7/0xb
---[ end trace 3f507febd6b4cc83 ]---
>>>> ##### CPU 1 buffer started ####
Which was the __trace_find_cmdline() function complaining about the pid
in the event record being negative.
After adding more test cases, this would trigger more often. Strangely
enough, it would never trigger on a single test, but instead would trigger
only when running all the tests. I believe that was the case because it
required one of the tests to be shutting down via delayed instances while
a new test started up.
After spending several days debugging this, I found that it was caused by
the iterator becoming corrupted. Debugging further, I found out why
the iterator became corrupted. It happened with the rb_iter_reset().
As consuming reads may not read the full reader page, and only part
of it, there's a "read" field to know where the last read took place.
The iterator must also start at the read position. In the rb_iter_reset()
code, if the reader page was disconnected from the ring buffer, the iterator
would start at the head page within the ring buffer (where writes still
happen). But the mistake there was that it still used the "read" field
to start the iterator on the head page, where it should always start
at zero because readers never read from within the ring buffer where
writes occur.
I originally wrote a patch to have it set the iter->head to 0 instead
of iter->head_page->read, but then I questioned why it wasn't always
setting the iter to point to the reader page, as the reader page is
still valid. The list_empty(reader_page->list) just means that it was
successful in swapping out. But the reader_page may still have data.
There was a bug report a long time ago that was not reproducible that
had something about trace_pipe (consuming read) not matching trace
(iterator read). This may explain why that happened.
Anyway, the correct answer to this bug is to always use the reader page
and not reset the iterator to inside the writable ring buffer.
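A hedged sketch of rb_iter_reset() after the fix (simplified): the iterator
always starts on the reader page, at that page's read offset.
	iter->head_page = cpu_buffer->reader_page;
	iter->head = cpu_buffer->reader_page->read;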
Fixes: d769041f8653 "ring_buffer: implement new locking"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiri Kosina [Wed, 3 Sep 2014 13:04:28 +0000 (15:04 +0200)]
ACPI / cpuidle: fix deadlock between cpuidle_lock and cpu_hotplug.lock
commit
6726655dfdd2dc60c035c690d9f10cb69d7ea075 upstream.
There is a following AB-BA dependency between cpu_hotplug.lock and
cpuidle_lock:
1) cpu_hotplug.lock -> cpuidle_lock
enable_nonboot_cpus()
_cpu_up()
cpu_hotplug_begin()
LOCK(cpu_hotplug.lock)
cpu_notify()
...
acpi_processor_hotplug()
cpuidle_pause_and_lock()
LOCK(cpuidle_lock)
2) cpuidle_lock -> cpu_hotplug.lock
acpi_os_execute_deferred() workqueue
...
acpi_processor_cst_has_changed()
cpuidle_pause_and_lock()
LOCK(cpuidle_lock)
get_online_cpus()
LOCK(cpu_hotplug.lock)
Fix this by reversing the order in which acpi_processor_cst_has_changed() does
things -- let it first take the protection against CPU hotplug by
calling get_online_cpus() and obtain the cpuidle lock only after that (and
perform the symmetric change when allowing CPU hotplug again and
dropping the cpuidle lock).
Spotted by lockdep.
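A hedged sketch of the resulting lock ordering (simplified):
	get_online_cpus();		/* CPU hotplug protection first ... */
	cpuidle_pause_and_lock();	/* ... then the cpuidle lock */
	/* update C-state information */
	cpuidle_resume_and_unlock();
	put_online_cpus();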
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lan Tianyu [Mon, 25 Aug 2014 23:29:24 +0000 (01:29 +0200)]
ACPI: Run fixed event device notifications in process context
commit
236105db632c6279a020f78c83e22eaef746006b upstream.
Currently, notify callbacks for fixed button events are run from
interrupt context. That is not necessary and after commit 0bf6368ee8f2
(ACPI / button: Add ACPI Button event via netlink routine) it causes
netlink routines to be called from interrupt context which is not
correct.
Also, that is different from non-fixed device events (including
non-fixed button events) whose notify callbacks are all executed from
process context.
For the above reasons, make fixed button device notify callbacks run
in process context which will avoid the deadlock when using netlink
to report button events to user space.
Fixes: 0bf6368ee8f2 (ACPI / button: Add ACPI Button event via netlink routine)
Link: https://lkml.org/lkml/2014/8/21/606
Reported-by: Benjamin Block <bebl@mageta.org>
Reported-by: Knut Petersen <Knut_Petersen@t-online.de>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
[rjw: Function names, subject and changelog.]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David E. Box [Tue, 8 Jul 2014 02:05:52 +0000 (10:05 +0800)]
ACPICA: Utilities: Fix memory leak in acpi_ut_copy_iobject_to_iobject
commit
8aa5e56eeb61a099ea6519eb30ee399e1bc043ce upstream.
Adds return status check on copy routines to delete the allocated destination
object if either copy fails. Reported by Colin Ian King on bugs.acpica.org,
Bug 1087.
The last applicable commit:
Commit:
3371c19c294a4cb3649aa4e84606be8a1d999e61
Subject: ACPICA: Remove ACPI_GET_OBJECT_TYPE macro
Link: https://bugs.acpica.org/show_bug.cgi?id=1087
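A hedged sketch of the added error handling (simplified):
	status = acpi_ut_copy_simple_object(source_desc, *dest_desc);
	if (ACPI_FAILURE(status)) {
		/* On failure, delete the allocated destination object. */
		acpi_ut_remove_reference(*dest_desc);
		return_ACPI_STATUS(status);
	}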
Reported-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David E. Box <david.e.box@linux.intel.com>
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ben Hutchings [Sun, 8 Jun 2014 22:33:25 +0000 (23:33 +0100)]
bfa: Fix undefined bit shift on big-endian architectures with 32-bit DMA address
commit
03a6c3ff3282ee9fa893089304d951e0be93a144 upstream.
bfa_swap_words() shifts its argument (assumed to be 64-bit) by 32 bits
each way. In two places the argument type is dma_addr_t, which may be
32-bit, in which case the effect of the bit shift is undefined:
drivers/scsi/bfa/bfa_fcpim.c: In function 'bfa_ioim_send_ioreq':
drivers/scsi/bfa/bfa_fcpim.c:2497:4: warning: left shift count >= width of type [enabled by default]
addr = bfa_sgaddr_le(sg_dma_address(sg));
^
drivers/scsi/bfa/bfa_fcpim.c:2497:4: warning: right shift count >= width of type [enabled by default]
drivers/scsi/bfa/bfa_fcpim.c:2509:4: warning: left shift count >= width of type [enabled by default]
addr = bfa_sgaddr_le(sg_dma_address(sg));
^
drivers/scsi/bfa/bfa_fcpim.c:2509:4: warning: right shift count >= width of type [enabled by default]
Avoid this by adding casts to u64 in bfa_swap_words().
Compile-tested only.
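A hedged sketch of the macro with the added casts (simplified): the argument is
widened to 64 bits before the 32-bit shifts, so a 32-bit dma_addr_t no longer
triggers undefined behaviour.
	#define bfa_swap_words(_x)  (	\
		((u64)(_x) << 32) | ((u64)(_x) >> 32))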
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Acked-by: Anil Gurumurthy <anil.gurumurthy@qlogic.com>
Fixes: f16a17507b09 ('[SCSI] bfa: remove all OS wrappers')
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Mack [Wed, 13 Aug 2014 19:51:06 +0000 (21:51 +0200)]
ASoC: pxa-ssp: drop SNDRV_PCM_FMTBIT_S24_LE
commit
9301503af016eb537ccce76adec0c1bb5c84871e upstream.
This mode is unsupported, as the DMA controller can't do zero-padding
of samples.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Reported-by: Johannes Stezenbach <js@sig21.net>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jarkko Nikula [Thu, 19 Jun 2014 06:32:05 +0000 (09:32 +0300)]
ASoC: max98090: Fix missing free_irq
commit
4adeb0ccf86a5af1825bbfe290dee9e60a5ab870 upstream.
max98090.c doesn't free the threaded interrupt it requests. This causes
an oops when doing "cat /proc/interrupts" after snd-soc-max98090.ko is
unloaded.
Fix this by requesting the interrupt by using devm_request_threaded_irq().
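A hedged sketch of the devm-managed request (names approximate): the interrupt
is released automatically when the device is unbound, so no explicit free_irq()
is needed on remove.
	ret = devm_request_threaded_irq(codec->dev, max98090->irq, NULL,
					max98090_interrupt,
					IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
					"max98090_interrupt", codec);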
Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sylwester Nawrocki [Fri, 4 Jul 2014 14:05:45 +0000 (16:05 +0200)]
ASoC: samsung: Correct I2S DAI suspend/resume ops
commit
d3d4e5247b013008a39e4d5f69ce4c60ed57f997 upstream.
We should save/restore relevant I2S registers regardless of
the dai->active flag, otherwise some settings are being lost
after system suspend/resume cycle. E.g. I2S slave mode set only
during dai initialization is not preserved and the device ends
up in master mode after system resume.
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Praveen Diwakar [Fri, 4 Jul 2014 05:47:41 +0000 (11:17 +0530)]
ASoC: wm_adsp: Add missing MODULE_LICENSE
commit
0a37c6efec4a2fdc2563c5a8faa472b814deee80 upstream.
Since MODULE_LICENSE is missing, the module fails to load;
add it for this module.
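A hedged sketch of the one-line addition (the exact license string follows the
driver's licensing; shown only as an example):
	MODULE_LICENSE("GPL");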
Signed-off-by: Praveen Diwakar <praveen.diwakar@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Reviewed-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Qiao Zhou [Wed, 4 Jun 2014 11:42:06 +0000 (19:42 +0800)]
ASoC: pcm: fix dpcm_path_put in dpcm runtime update
commit
7ed9de76ff342cbd717a9cf897044b99272cb8f8 upstream.
We need to release the DAPM widget list after dpcm_path_get() in
soc_dpcm_runtime_update(); otherwise there will be a potential memory
leak. Add dpcm_path_put() to fix it.
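A hedged sketch of the pairing (simplified):
	paths = dpcm_path_get(fe, SNDRV_PCM_STREAM_PLAYBACK, &list);
	if (paths < 0)
		return paths;
	/* ... runtime update using the widget list ... */
	dpcm_path_put(&list);	/* the call that was missing */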
Signed-off-by: Qiao Zhou <zhouqiao@marvell.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jonas Bonn [Sun, 19 Feb 2012 16:36:53 +0000 (17:36 +0100)]
openrisc: Rework signal handling
commit
10f67dbf6add97751050f294d4c8e0cc1e5c2c23 upstream.
The mainline signal handling code for OpenRISC has been buggy since day
one with respect to syscall restart. This patch significantly reworks
the signal handling code:
i) Move the "work pending" loop to C code (borrowed from ARM arch)
ii) Allow a tracer to muck about with the IP and skip syscall restart
in that case (again, borrowed from ARM)
iii) Make signal handling WRT syscall restart actually work
iv) Make the signal handling code look more like that of other
architectures so that it's easier for others to follow
Reported-by: Anders Nystrom <anders@southpole.se>
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ralf Baechle [Tue, 17 Sep 2013 10:44:31 +0000 (12:44 +0200)]
MIPS: Fix accessing to per-cpu data when flushing the cache
commit
ff522058bd717506b2fa066fa564657f2b86477e upstream.
This fixes the following issue
BUG: using smp_processor_id() in preemptible [00000000] code: kjournald/1761
caller is blast_dcache32+0x30/0x254
Call Trace:
[<8047f02c>] dump_stack+0x8/0x34
[<802e7e40>] debug_smp_processor_id+0xe0/0xf0
[<80114d94>] blast_dcache32+0x30/0x254
[<80118484>] r4k_dma_cache_wback_inv+0x200/0x288
[<80110ff0>] mips_dma_map_sg+0x108/0x180
[<80355098>] ide_dma_prepare+0xf0/0x1b8
[<8034eaa4>] do_rw_taskfile+0x1e8/0x33c
[<8035951c>] ide_do_rw_disk+0x298/0x3e4
[<8034a3c4>] do_ide_request+0x2e0/0x704
[<802bb0dc>] __blk_run_queue+0x44/0x64
[<802be000>] queue_unplugged.isra.36+0x1c/0x54
[<802beb94>] blk_flush_plug_list+0x18c/0x24c
[<802bec6c>] blk_finish_plug+0x18/0x48
[<8026554c>] journal_commit_transaction+0x3b8/0x151c
[<80269648>] kjournald+0xec/0x238
[<8014ac00>] kthread+0xb8/0xc0
[<8010268c>] ret_from_kernel_thread+0x14/0x1c
Caches in most systems are identical - but not always, so we can't avoid
the use of smp_call_function() by just looking at the boot CPU's data;
we have to fiddle with preemption instead.
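A hedged sketch of the pattern (not the exact hunk): pin the task to its CPU
while the per-CPU cache information is used.
	preempt_disable();
	blast_dcache32();	/* operates on the current CPU's cache parameters */
	preempt_enable();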
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5835
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Aaro Koskinen [Tue, 22 Jul 2014 11:51:08 +0000 (14:51 +0300)]
MIPS: OCTEON: make get_system_type() thread-safe
commit
608308682addfdc7b8e2aee88f0e028331d88e4d upstream.
get_system_type() is not thread-safe on OCTEON. It uses static data;
an even more dangerous issue is that it calls cvmx_fuse_read_byte()
every time without any synchronization. Currently it's possible to get
processes stuck looping forever in the kernel simply by launching multiple
readers of /proc/cpuinfo:
(while true; do cat /proc/cpuinfo > /dev/null; done) &
(while true; do cat /proc/cpuinfo > /dev/null; done) &
...
Fix by initializing the system type string only once during the early
boot.
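A hedged sketch of the shape of the fix (simplified): the board name is built
once during early boot and get_system_type() merely returns the cached string.
	static char system_type[64];	/* filled in once by the early boot code */

	const char *get_system_type(void)
	{
		return system_type;
	}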
Signed-off-by: Aaro Koskinen <aaro.koskinen@nsn.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: http://patchwork.linux-mips.org/patch/7437/
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Markos Chandras [Wed, 22 Jan 2014 14:40:00 +0000 (14:40 +0000)]
MIPS: asm: thread_info: Add _TIF_SECCOMP flag
commit
137f7df8cead00688524c82360930845396b8a21 upstream.
Add _TIF_SECCOMP flag to _TIF_WORK_SYSCALL_ENTRY to indicate
that the system call needs to be checked against a seccomp filter.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6405/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
[bwh: Backported to 3.2: various other flags are not included in
_TIF_WORK_SYSCALL_ENTRY]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ralf Baechle [Tue, 28 May 2013 23:02:18 +0000 (01:02 +0200)]
MIPS: Cleanup flags in syscall flags handlers.
commit
e7f3b48af7be9f8007a224663a5b91340626fed5 upstream.
This will simplify further modifications.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Smith [Wed, 23 Jul 2014 13:40:08 +0000 (14:40 +0100)]
MIPS: asm/reg.h: Make 32- and 64-bit definitions available at the same time
commit
bcec7c8da6b092b1ff3327fd83c2193adb12f684 upstream.
Get rid of the WANT_COMPAT_REG_H test and instead define both the 32-
and 64-bit register offset definitions at the same time with
MIPS{32,64}_ prefixes, then define the existing EF_* names to the
correct definitions for the kernel's bitness.
This patch is a prerequisite of the following bug fix patch.
Signed-off-by: Alex Smith <alex@alex-smith.me.uk>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7451/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Huacai Chen [Wed, 16 Jul 2014 01:19:16 +0000 (09:19 +0800)]
MIPS: Remove BUG_ON(!is_fpu_owner()) in do_ade()
commit
2e5767a27337812f6850b3fa362419e2f085e5c3 upstream.
In do_ade(), is_fpu_owner() isn't preempt-safe. For example, when an
unaligned ldc1 is executed, do_cpu() is called and then FPU will be
enabled (and TIF_USEDFPU will be set for the current process). Then,
do_ade() is called because the access is unaligned. If the current
process is preempted at this time, TIF_USEDFPU will be cleared. So when
the process is scheduled again, BUG_ON(!is_fpu_owner()) is triggered.
This small program can trigger this BUG in a preemptible kernel:
int main (int argc, char *argv[])
{
	double u64[2];

	while (1) {
		/* ldc1 from offset 4 of a double array is misaligned for a
		 * 64-bit FP load, so do_ade() is entered. */
		asm volatile (
			".set push \n\t"
			".set noreorder \n\t"
			"ldc1 $f3, 4(%0) \n\t"
			".set pop \n\t"
			::"r"(u64):
		);
	}
	return 0;
}
V2: Remove the BUG_ON() unconditionally due to Paul's suggestion.
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Jie Chen <chenj@lemote.com>
Signed-off-by: Rui Wang <wangr@lemote.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Huacai Chen [Tue, 29 Jul 2014 06:54:40 +0000 (14:54 +0800)]
MIPS: tlbex: Fix a missing statement for HUGETLB
commit
8393c524a25609a30129e4a8975cf3b91f6c16a5 upstream.
In commit 2c8c53e28f1 (MIPS: Optimize TLB handlers for Octeon CPUs)
build_r4000_tlb_refill_handler() is modified. But it isn't compatible
with the original code in the HUGETLB case, because there is a copy &
paste error and one line of code is missing. It is very easy to trigger
the bug with LTP's hugemmap05 test.
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Binbin Zhou <zhoubb@lemote.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/7496/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Burton [Tue, 22 Jul 2014 13:21:21 +0000 (14:21 +0100)]
MIPS: Prevent user from setting FCSR cause bits
commit
b1442d39fac2fcfbe6a4814979020e993ca59c9e upstream.
If one or more matching FCSR cause & enable bits are set in saved thread
context then when that context is restored the kernel will take an FP
exception. This is of course undesirable and considered an oops, leading
to the kernel writing a backtrace to the console and potentially
rebooting depending upon the configuration. Thus the kernel avoids this
situation by clearing the cause bits of the FCSR register when handling
FP exceptions and after emulating FP instructions.
However the kernel does not prevent userland from setting arbitrary FCSR
cause & enable bits via ptrace, using either the PTRACE_POKEUSR or
PTRACE_SETFPREGS requests. This means userland can trivially cause the
kernel to oops on any system with an FPU. Prevent this from happening
by clearing the cause bits when writing to the saved FCSR context via
ptrace.
This problem appears to exist at least back to the beginning of the git
era in the PTRACE_POKEUSR case.
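A hedged sketch of the masking applied on the ptrace write paths (simplified):
	case FPC_CSR:
		/* Never let userland plant FCSR cause bits in saved context. */
		child->thread.fpu.fcr31 = data & ~FPU_CSR_ALL_X;
		break;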
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7438/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jeffrey Deans [Thu, 17 Jul 2014 08:20:56 +0000 (09:20 +0100)]
MIPS: GIC: Prevent array overrun
commit
ffc8415afab20bd97754efae6aad1f67b531132b upstream.
A GIC interrupt which is declared as having a GIC_MAP_TO_NMI_MSK
mapping causes the cpu parameter to gic_setup_intr() to be increased
to 32, causing memory corruption when pcpu_masks[] is written to again
later in the function.
Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7375/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
K. Y. Srinivasan [Sat, 12 Jul 2014 16:48:32 +0000 (09:48 -0700)]
drivers: scsi: storvsc: Correctly handle TEST_UNIT_READY failure
commit
3533f8603d28b77c62d75ec899449a99bc6b77a1 upstream.
On some Windows hosts on FC SANs, TEST_UNIT_READY can return SRB_STATUS_ERROR.
Correctly handle this. Note that there is sufficient sense information to
support scsi error handling even in this case.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
K. Y. Srinivasan [Sat, 12 Jul 2014 16:48:30 +0000 (09:48 -0700)]
Drivers: scsi: storvsc: Implement an eh_timed_out handler
commit
56b26e69c8283121febedd12b3cc193384af46b9 upstream.
On Azure, we have seen instances of unbounded I/O latencies. To deal with
this issue, implement a handler that can reset the timeout. Note that the
host guarantees that it will respond to each command that has been issued.
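A hedged sketch of the handler (simplified):
	static enum blk_eh_timer_return storvsc_eh_timed_out(struct scsi_cmnd *scmnd)
	{
		/* The host will eventually complete the command; keep waiting. */
		return BLK_EH_RESET_TIMER;
	}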
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
[hch: added a better comment explaining the issue]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gavin Shan [Mon, 11 Aug 2014 09:16:19 +0000 (19:16 +1000)]
powerpc/pseries: Failure on removing device node
commit
f1b3929c232784580e5d8ee324b6bc634e709575 upstream.
While running the command "drmgr -c phb -r -s 'PHB 528'", the following
backtrace jumped out because the target device node isn't marked
with OF_DETACHED by of_detach_node(), which is caused by an error
returned from the memory hotplug related reconfig notifier when
CONFIG_MEMORY_HOTREMOVE is disabled. The patch fixes it.
ERROR: Bad of_node_put() on /pci@800000020000210/ethernet@0
CPU: 14 PID: 2252 Comm: drmgr Tainted: G W 3.16.0+ #427
Call Trace:
[c000000012a776a0] [c000000000013d9c] .show_stack+0x88/0x148 (unreliable)
[c000000012a77750] [c00000000083cd34] .dump_stack+0x7c/0x9c
[c000000012a777d0] [c0000000006807c4] .of_node_release+0x58/0xe0
[c000000012a77860] [c00000000038a7d0] .kobject_release+0x174/0x1b8
[c000000012a77900] [c00000000038a884] .kobject_put+0x70/0x78
[c000000012a77980] [c000000000681680] .of_node_put+0x28/0x34
[c000000012a77a00] [c000000000681ea8] .__of_get_next_child+0x64/0x70
[c000000012a77a90] [c000000000682138] .of_find_node_by_path+0x1b8/0x20c
[c000000012a77b40] [c000000000051840] .ofdt_write+0x308/0x688
[c000000012a77c20] [c000000000238430] .proc_reg_write+0xb8/0xd4
[c000000012a77cd0] [c0000000001cbeac] .vfs_write+0xec/0x1f8
[c000000012a77d70] [c0000000001cc3b0] .SyS_write+0x58/0xa0
[c000000012a77e30] [c00000000000a064] syscall_exit+0x0/0x98
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Aneesh Kumar K.V [Wed, 13 Aug 2014 07:02:03 +0000 (12:32 +0530)]
powerpc/mm: Use read barrier when creating real_pte
commit
85c1fafd7262e68ad821ee1808686b1392b1167d upstream.
On ppc64 we support 4K hash pte with 64K page size. That requires
us to track the hash pte slot information on a per 4k basis. We do that
by storing the slot details in the second half of pte page. The pte bit
_PAGE_COMBO is used to indicate whether the second half needs to be
looked at while building the real_pte. We need to use a read memory barrier
while doing that so that the load of hidx is not reordered w.r.t. the
_PAGE_COMBO check. On the store side we already do an lwsync in __hash_page_4K.
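A hedged sketch of __real_pte() with the barrier (simplified):
	if (pte_val(pte) & _PAGE_COMBO) {
		/* Order the hidx load against the _PAGE_COMBO check; the
		 * store side is ordered by the lwsync in __hash_page_4K. */
		smp_rmb();
		rpte.hidx = pte_val(*(ptep + PTRS_PER_PTE));
	}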
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrey Utkin [Mon, 4 Aug 2014 20:13:10 +0000 (23:13 +0300)]
powerpc/mm/numa: Fix break placement
commit
b00fc6ec1f24f9d7af9b8988b6a198186eb3408c upstream.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=81631
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Andrey Utkin <andrey.krieger.utkin@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nikesh Oswal [Fri, 4 Jul 2014 08:55:16 +0000 (09:55 +0100)]
regulator: arizona-ldo1: remove bypass functionality
commit
5b919f3ebb533cbe400664837e24f66a0836b907 upstream.
WM5110/8280 devices do not support bypass mode for LDO1 so remove
the bypass callbacks registered with regulator core.
Signed-off-by: Nikesh Oswal <nikesh@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Welling [Mon, 28 Jul 2014 23:01:04 +0000 (18:01 -0500)]
mfd: omap-usb-host: Fix improper mask use.
commit
46de8ff8e80a6546aa3d2fdf58c6776666301a0c upstream.
single-ulpi-bypass is a flag used for older OMAP3 silicon.
The flag, when set, can exercise code that improperly uses the
OMAP_UHH_HOSTCONFIG_ULPI_BYPASS define to clear the corresponding bit.
Instead it clears all of the other bits, disabling all of the ports in
the process.
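A hedged illustration of the mask bug (macro name approximate):
	reg &= OMAP_UHH_HOSTCONFIG_ULPI_BYPASS;		/* wrong: keeps only this bit */
	reg &= ~OMAP_UHH_HOSTCONFIG_ULPI_BYPASS;	/* right: clears only this bit */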
Signed-off-by: Michael Welling <mwelling@emacinc.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sasha Levin [Wed, 6 Aug 2014 23:08:14 +0000 (16:08 -0700)]
kernel/smp.c:on_each_cpu_cond(): fix warning in fallback path
commit
618fde872163e782183ce574c77f1123e2be8887 upstream.
The rarely-executed memory-allocation-failed callback path generates a
WARN_ON_ONCE() when smp_call_function_single() succeeds. Presumably
it's supposed to warn on failures.
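A hedged sketch of the corrected fallback (simplified): smp_call_function_single()
returns 0 on success, so only a non-zero result deserves a warning.
	ret = smp_call_function_single(cpu, func, info, wait);
	WARN_ON_ONCE(ret);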
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Christoph Lameter <cl@gentwo.org>
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Tejun Heo <htejun@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Paris [Wed, 23 Jul 2014 19:36:26 +0000 (15:36 -0400)]
CAPABILITIES: remove undefined caps from all processes
commit
7d8b6c63751cfbbe5eef81a48c22978b3407a3ad upstream.
This is effectively a revert of 7b9a7ec565505699f503b4fcf61500dceb36e744
plus fixing it a different way...
We found, when trying to run an application from an application which
had dropped privs, that the kernel does security checks on undefined
capability bits. This was ESPECIALLY difficult to debug as those
undefined bits are hidden from /proc/$PID/status.
Consider a root application which drops all capabilities from ALL 4
capability sets. We assume, since the application is going to set
eff/perm/inh from an array that it will clear not only the defined caps
less than CAP_LAST_CAP, but also the higher 28ish bits which are
undefined future capabilities.
The BSET gets cleared differently. Instead it is cleared one bit at a
time. The problem here is that in security/commoncap.c::cap_task_prctl()
we actually check the validity of a capability being read. So any task
which attempts to 'read all things set in bset' followed by 'unset all
things set in bset' will not even attempt to unset the undefined bits
higher than CAP_LAST_CAP.
So the 'parent' will look something like:
CapInh:	0000000000000000
CapPrm:	0000000000000000
CapEff:	0000000000000000
CapBnd:	ffffffc000000000
All of this 'should' be fine. Given that these are undefined bits that
aren't supposed to have anything to do with permissions. But they do...
So let's now consider a task which cleared the eff/perm/inh completely
and cleared all of the valid caps in the bset (but not the invalid caps
it couldn't read out of the kernel). We know that this is exactly what
the libcap-ng library does and what the go capabilities library does.
They both leave you in that above situation if you try to clear all of
your capabilities from all 4 sets. If that root task calls execve()
the child task will pick up all caps not blocked by the bset. The bset
however does not block bits higher than CAP_LAST_CAP. So now the child
task has bits in eff which are not in the parent. These are
'meaningless' undefined bits, but still bits which the parent doesn't
have.
The problem is now in cred_cap_issubset() (or any operation which does a
subset test) as the child, while a subset for valid cap bits, is not a
subset for invalid cap bits! So now we decide during commit_creds() that
the child is not dumpable, given it is 'more privileged' than its parent. It
also means the parent cannot ptrace the child, and other stupidity.
The solution here:
1) stop hiding capability bits in status
This makes debugging easier!
2) stop giving any task undefined capability bits. It's simple: if you
don't put those invalid bits in CAP_FULL_SET you won't get them in init
and you won't get them in any other task either (see the sketch after
this list).
This fixes the cap_issubset() tests and resulting fallout (which
made the init task in a docker container untraceable among other
things)
3) mask out undefined bits when sys_capset() is called as it might use
~0, ~0 to denote 'all capabilities' for backward/forward compatibility.
This lets 'capsh --caps="all=eip" -- -c /bin/bash' run.
4) mask out undefined bits when we read a file capability off of disk, as
again likely all bits are set in the xattr for forward/backward
compatibility.
This lets 'setcap all+pe /bin/bash; /bin/bash' run
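A hypothetical helper illustrating the masking idea from points 2-4 (not the
actual patch): anything above CAP_LAST_CAP is stripped before it can land in a
capability set.
	static inline kernel_cap_t cap_drop_undefined(kernel_cap_t caps)
	{
		/* CAP_FULL_SET now contains only defined capability bits. */
		return cap_intersect(caps, CAP_FULL_SET);
	}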
Signed-off-by: Eric Paris <eparis@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Andrew Vagin <avagin@openvz.org>
Cc: Andrew G. Morgan <morgan@kernel.org>
Cc: Serge E. Hallyn <serge.hallyn@canonical.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Steve Grubb <sgrubb@redhat.com>
Cc: Dan Walsh <dwalsh@redhat.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jarkko Sakkinen [Fri, 9 May 2014 11:23:10 +0000 (14:23 +0300)]
tpm: missing tpm_chip_put in tpm_get_random()
commit
3e14d83ef94a5806a865b85b513b4e891923c19b upstream.
Regression in 41ab999c. A call to tpm_chip_put() is missing. This
will cause the TPM device driver not to unload if tpm_get_random()
is called.
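A hedged sketch of the pairing in tpm_get_random() (simplified):
	chip = tpm_chip_find_get(chip_num);
	if (chip == NULL)
		return -ENODEV;
	/* ... read the random bytes ... */
	tpm_chip_put(chip);	/* the call that was missing */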
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Peter Huewe <peterhuewe@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guenter Roeck [Wed, 13 Aug 2014 18:21:34 +0000 (11:21 -0700)]
firmware: Do not use WARN_ON(!spin_is_locked())
commit
aee530cfecf4f3ec83b78406bac618cec35853f8 upstream.
spin_is_locked() always returns false for uniprocessor configurations
in several architectures, so do not use WARN_ON with it.
Use lockdep_assert_held() instead to also reduce overhead in
non-debug kernels.
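A hedged sketch of the replacement (variable name approximate):
	lockdep_assert_held(&__efivars->lock);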
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mark A. Greer [Wed, 2 Jul 2014 03:28:32 +0000 (20:28 -0700)]
spi: omap2-mcspi: Configure hardware when slave driver changes mode
commit
97ca0d6cc118716840ea443e010cb3d5f2d25eaf upstream.
Commit id 2bd16e3e23d9df41592c6b257c59b6860a9cc3ea
(spi: omap2-mcspi: Do not configure the controller
on each transfer unless needed) does its job too
well so omap2_mcspi_setup_transfer() isn't called
even when an SPI slave driver changes 'spi->mode'.
The result is that the mode requested by the SPI
slave driver never takes effect.
Fix this by adding the 'mode' member to the
omap2_mcspi_cs structure which holds the mode
value that the hardware is configured for.
When the SPI slave driver changes 'spi->mode'
it will be different than the value of this new
member and the SPI master driver will know that
the hardware must be reconfigured (by calling
omap2_mcspi_setup_transfer()).
Fixes: 2bd16e3e23 (spi: omap2-mcspi: Do not configure the controller on each transfer unless needed)
Signed-off-by: Mark A. Greer <mgreer@animalcreek.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Petazzoni [Sun, 27 Jul 2014 21:53:19 +0000 (23:53 +0200)]
spi: orion: fix incorrect handling of cell-index DT property
commit
e06871cd2c92e5c65d7ca1d32866b4ca5dd4ac30 upstream.
In commit f814f9ac5a81 ("spi/orion: add device tree binding"), Device
Tree support was added to the spi-orion driver. However, this commit
reads the "cell-index" property, without taking into account the fact
that DT properties are big-endian encoded.
Since most of the platforms using spi-orion with DT have apparently
not used anything but cell-index = <0>, the problem was not
visible. But as soon as one starts using cell-index = <1>, the problem
becomes clearly visible, as the master->bus_num gets a wrong value
(actually it gets the value 0, which conflicts with the first bus that
has cell-index = <0>).
This commit fixes that by using of_property_read_u32() to read the
property value, which does the appropriate endianness conversion when
needed.
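A hedged sketch of the fix (simplified): let the OF helper perform the
endianness conversion instead of dereferencing the raw big-endian property.
	u32 cell_index = 0;

	if (!of_property_read_u32(pdev->dev.of_node, "cell-index", &cell_index))
		master->bus_num = cell_index;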
Fixes: f814f9ac5a81 ("spi/orion: add device tree binding")
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Joerg Roedel [Tue, 5 Aug 2014 15:50:15 +0000 (17:50 +0200)]
iommu/amd: Fix cleanup_domain for mass device removal
commit
9b29d3c6510407d91786c1cf9183ff4debb3473a upstream.
When multiple devices are detached in __detach_device, they
are also removed from the domains dev_list. This makes it
unsafe to use list_for_each_entry_safe, as the next pointer
might also not be in the list anymore after __detach_device
returns. So just repeatedly remove the first element of the
list until it is empty.
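A hedged sketch of the resulting loop (simplified): taking the first entry each
time is safe even when __detach_device() unlinks other entries as well.
	while (!list_empty(&domain->dev_list)) {
		struct iommu_dev_data *entry =
			list_first_entry(&domain->dev_list,
					 struct iommu_dev_data, list);
		__detach_device(entry);
	}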
Tested-by: Marti Raudsepp <marti@juffo.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Salva Peiró [Sat, 7 Jun 2014 14:41:44 +0000 (11:41 -0300)]
media: media-device: Remove duplicated memset() in media_enum_entities()
commit
f8ca6ac00d2ba24c5557f08f81439cd3432f0802 upstream.
After zeroing the whole struct media_entity_desc u_ent,
it is no longer necessary to memset(0) its u_ent.name field.
Signed-off-by: Salva Peiró <speiro@ai2.upv.es>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mauro Carvalho Chehab [Sun, 8 Jun 2014 16:54:57 +0000 (13:54 -0300)]
media: au0828: Only alt setting logic when needed
commit
64ea37bbd8a5815522706f0099ad3f11c7537e15 upstream.
It seems that there's a bug in the au0828 hardware/firmware
related to the alternate setting: when the device is already at
alt 5, a further call causes the URBs to receive -ESHUTDOWN.
I found two different incarnations of this issue:
1) at qv4l2, it fails the second time we try to open the
video screen;
2) at xawtv, when an audio underrun occurs, which is very
frequent, at least on my test machine.
The fix is simple: just check if alt=5 before calling
set_usb_interface().
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mauro Carvalho Chehab [Mon, 21 Jul 2014 16:28:15 +0000 (13:28 -0300)]
media: xc4000: Fix get_frequency()
commit
4c07e32884ab69574cfd9eb4de3334233c938071 upstream.
The programmed frequency on xc4000 is not the middle
frequency, but the initial frequency on the bandwidth range.
However, the DVB API works with the middle frequency.
This works fine on set_frontend, as the device calculates
the needed offset. However, at get_frequency(), the returned
value is the initial frequency. That's generally not a big
problem on most drivers, however, starting with changeset
6fe1099c7aec, the frequency drift is taken into account in the
dib7000p driver.
This broke support for the PCTV 340e, which uses the dib7000p demod and
the xc4000 tuner.
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mauro Carvalho Chehab [Mon, 21 Jul 2014 17:21:18 +0000 (14:21 -0300)]
media: xc5000: Fix get_frequency()
commit
a3eec916cbc17dc1aaa3ddf120836cd5200eb4ef upstream.
The programmed frequency on xc5000 is not the middle
frequency, but the initial frequency on the bandwidth range.
However, the DVB API works with the middle frequency.
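A hedged sketch of the idea (field names approximate): report the middle of the
tuned bandwidth, as the DVB API expects, rather than the low edge that was
programmed into the tuner.
	*freq = priv->freq_hz + priv->freq_offset;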
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Fri, 5 Sep 2014 23:32:00 +0000 (16:32 -0700)]
Linux 3.10.54
Greg Kroah-Hartman [Wed, 27 Aug 2014 23:55:29 +0000 (16:55 -0700)]
USB: fix build error with CONFIG_PM_RUNTIME disabled
commit
a9ef803d740bfadf5e505fbc57efa57692e27025 upstream.
commit bdd405d2a528 ("usb: hub: Prevent hub autosuspend if
usbcore.autosuspend is -1") causes a build error if CONFIG_PM_RUNTIME is
disabled. Fix that by doing a simple #ifdef guard around it.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Cc: Roger Quadros <rogerq@ti.com>
Cc: Michael Welling <mwelling@emacinc.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Trond Myklebust [Tue, 26 Aug 2014 02:33:12 +0000 (22:33 -0400)]
NFSv4: Fix problems with close in the presence of a delegation
commit
aee7af356e151494d5014f57b33460b162f181b5 upstream.
In the presence of delegations, we can no longer assume that the
state->n_rdwr, state->n_rdonly, state->n_wronly reflect the open
stateid share mode, and so we need to calculate the initial value
for calldata->arg.fmode using the state->flags.
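A hedged sketch of the idea (simplified, not the exact hunk): derive the mode
from the state flags instead of the n_rdonly/n_wronly/n_rdwr counters.
	calldata->arg.fmode = 0;
	if (test_bit(NFS_O_RDONLY_STATE, &state->flags))
		calldata->arg.fmode |= FMODE_READ;
	if (test_bit(NFS_O_WRONLY_STATE, &state->flags))
		calldata->arg.fmode |= FMODE_WRITE;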
Reported-by: James Drews <drews@engr.wisc.edu>
Fixes: 88069f77e1ac5 (NFSv41: Fix a potential state leakage when...)
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>