wrapfs-4.13.y.git
9 years ago efi: Fix out-of-bounds read in variable_matches()
Laszlo Ersek [Thu, 21 Apr 2016 16:21:11 +0000 (18:21 +0200)]
efi: Fix out-of-bounds read in variable_matches()

[ Upstream commit 630ba0cc7a6dbafbdee43795617c872b35cde1b4 ]

The variable_matches() function can currently read "var_name[len]", for
example when:

 - var_name[0] == 'a',
 - len == 1
 - match_name points to the NUL-terminated string "ab".

This function is supposed to accept "var_name" inputs that are not
NUL-terminated (hence the "len" parameter). Document the function, and
access "var_name[*match]" only if "*match" is smaller than "len".

Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Cc: Peter Jones <pjones@redhat.com>
Cc: Matthew Garrett <mjg59@coreos.com>
Cc: Jason Andryuk <jandryuk@gmail.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: <stable@vger.kernel.org> # v3.10+
Link: http://thread.gmane.org/gmane.comp.freedesktop.xorg.drivers.intel/86906
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
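For illustration, a minimal sketch of the bounded comparison the changelog
describes (not a verbatim copy of the upstream helper in
drivers/firmware/efi/vars.c; details may differ):

  /*
   * Sketch: compare a possibly non-NUL-terminated var_name of length
   * 'len' against the NUL-terminated match_name without ever reading
   * past var_name[len - 1].
   */
  static bool variable_matches(const char *var_name, size_t len,
                               const char *match_name, int *match)
  {
          for (*match = 0; ; (*match)++) {
                  char c = match_name[*match];

                  if (c == '*')           /* wildcard matches the rest */
                          return true;
                  if (c == '\0')          /* both names must end together */
                          return *match == len;
                  /* dereference var_name[*match] only while it is in bounds */
                  if (*match >= len || c != var_name[*match])
                          return false;
          }
  }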
9 years ago iio: ak8975: Fix NULL pointer exception on early interrupt
Krzysztof Kozlowski [Mon, 4 Apr 2016 05:54:59 +0000 (14:54 +0900)]
iio: ak8975: Fix NULL pointer exception on early interrupt

[ Upstream commit 07d2390e36ee5b3265e9cc8305f2a106c8721e16 ]

In certain probe conditions the interrupt came right after registering
the handler causing a NULL pointer exception because of uninitialized
waitqueue:

$ udevadm trigger
i2c-gpio i2c-gpio-1: using pins 143 (SDA) and 144 (SCL)
i2c-gpio i2c-gpio-3: using pins 53 (SDA) and 52 (SCL)
Unable to handle kernel NULL pointer dereference at virtual address 00000000
pgd = e8b38000
[00000000] *pgd=00000000
Internal error: Oops: 5 [#1] SMP ARM
Modules linked in: snd_soc_i2s(+) i2c_gpio(+) snd_soc_idma snd_soc_s3c_dma snd_soc_core snd_pcm_dmaengine snd_pcm snd_timer snd soundcore ac97_bus spi_s3c64xx pwm_samsung dwc2 exynos_adc phy_exynos_usb2 exynosdrm exynos_rng rng_core rtc_s3c
CPU: 0 PID: 717 Comm: data-provider-m Not tainted 4.6.0-rc1-next-20160401-00011-g1b8d87473b9e-dirty #101
Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
(...)
(__wake_up_common) from [<c0379624>] (__wake_up+0x38/0x4c)
(__wake_up) from [<c0a41d30>] (ak8975_irq_handler+0x28/0x30)
(ak8975_irq_handler) from [<c0386720>] (handle_irq_event_percpu+0x88/0x140)
(handle_irq_event_percpu) from [<c038681c>] (handle_irq_event+0x44/0x68)
(handle_irq_event) from [<c0389c40>] (handle_edge_irq+0xf0/0x19c)
(handle_edge_irq) from [<c0385e04>] (generic_handle_irq+0x24/0x34)
(generic_handle_irq) from [<c05ee360>] (exynos_eint_gpio_irq+0x50/0x68)
(exynos_eint_gpio_irq) from [<c0386720>] (handle_irq_event_percpu+0x88/0x140)
(handle_irq_event_percpu) from [<c038681c>] (handle_irq_event+0x44/0x68)
(handle_irq_event) from [<c0389a70>] (handle_fasteoi_irq+0xb4/0x194)
(handle_fasteoi_irq) from [<c0385e04>] (generic_handle_irq+0x24/0x34)
(generic_handle_irq) from [<c03860b4>] (__handle_domain_irq+0x5c/0xb4)
(__handle_domain_irq) from [<c0301774>] (gic_handle_irq+0x54/0x94)
(gic_handle_irq) from [<c030c910>] (__irq_usr+0x50/0x80)

The bug was reproduced on exynos4412-trats2 (with a max77693 device also
using i2c-gpio) after building max77693 as a module.

Cc: <stable@vger.kernel.org>
Fixes: 94a6d5cf7caa ("iio:ak8975 Implement data ready interrupt handling")
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Tested-by: Gregor Boirie <gregor.boirie@parrot.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
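The general rule the fix restores: everything the interrupt handler touches
must be initialized before the handler is registered. A hedged sketch of that
ordering (field and flag names are illustrative, not the exact ak8975 code):

  /* Initialize everything the ISR uses *before* request_irq(). */
  init_waitqueue_head(&data->data_ready_queue);
  clear_bit(0, &data->flags);

  ret = devm_request_irq(&client->dev, client->irq, ak8975_irq_handler,
                         IRQF_TRIGGER_RISING, dev_name(&client->dev), data);
  if (ret)
          return ret;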
9 years ago regmap: spmi: Fix regmap_spmi_ext_read in multi-byte case
Jack Pham [Fri, 15 Apr 2016 06:37:26 +0000 (23:37 -0700)]
regmap: spmi: Fix regmap_spmi_ext_read in multi-byte case

[ Upstream commit dec8e8f6e6504aa3496c0f7cc10c756bb0e10f44 ]

Specifically for the case of reads that use the Extended Register
Read Long command, a multi-byte read operation is broken up into
8-byte chunks.  However the call to spmi_ext_register_readl() is
incorrectly passing 'val_size', which if greater than 8 will
always fail.  The argument should instead be 'len'.

Fixes: c9afbb05a9ff ("regmap: spmi: support base and extended register spaces")
Signed-off-by: Jack Pham <jackp@codeaurora.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
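Conceptually, the chunking loop must pass the per-chunk length rather than the
total read size; a rough sketch under that assumption (not the exact
regmap-spmi source):

  while (val_size) {
          size_t len = min_t(size_t, val_size, 8);        /* 8-byte chunks */

          /* pass 'len', not 'val_size', to the SPMI read helper */
          err = spmi_ext_register_readl(sdev, addr, val, len);
          if (err)
                  return err;

          addr     += len;
          val      += len;
          val_size -= len;
  }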
9 years ago ata: ahci-platform: Add ports-implemented DT bindings.
Srinivas Kandagatla [Fri, 1 Apr 2016 07:52:57 +0000 (08:52 +0100)]
ata: ahci-platform: Add ports-implemented DT bindings.

[ Upstream commit 17dcc37e3e847bc0e67a5b1ec52471fcc6c18682 ]

On some SoCs the PORTS_IMPL register value is never programmed by the
firmware and is left at zero, which means that no SATA ports are
available to software. The AHCI driver used to cope with this by
fabricating the port_map if the PORTS_IMPL register reads zero,
but a recent patch broke this workaround because a zero value is valid
for NVMe disks.

This patch adds a ports-implemented DT binding as a workaround for this issue,
so that DT can override the PORTS_IMPL register in cases where
the firmware did not program it already.

Fixes: 566d1827df2e ("libata: disable forced PORTS_IMPL for >= AHCI 1.3")
Cc: stable@vger.kernel.org # v4.5+
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Acked-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
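A hedged sketch of how the platform driver could consume such a property (the
exact field it is stored in is a driver detail; force_port_map is an
assumption here):

  u32 ports = 0;

  /* Let DT supply the port map when firmware left PORTS_IMPL at zero. */
  if (!of_property_read_u32(dev->of_node, "ports-implemented", &ports))
          hpriv->force_port_map = ports;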
9 years ago libahci: save port map for forced port map
Srinivas Kandagatla [Fri, 1 Apr 2016 07:52:56 +0000 (08:52 +0100)]
libahci: save port map for forced port map

[ Upstream commit 2fd0f46cb1b82587c7ae4a616d69057fb9bd0af7 ]

In use cases where force_port_map is used, saved_port_map is never set,
resulting in the PORTS_IMPL register not being programmed as part of the
initial config. This patch fixes this by setting it to port_map even in the
case where force_port_map is used, making it more in line with other parts
of the code.

Fixes: 566d1827df2e ("libata: disable forced PORTS_IMPL for >= AHCI 1.3")
Cc: stable@vger.kernel.org # v4.5+
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Acked-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago regulator: s2mps11: Fix invalid selector mask and voltages for buck9
Krzysztof Kozlowski [Mon, 28 Mar 2016 04:09:56 +0000 (13:09 +0900)]
regulator: s2mps11: Fix invalid selector mask and voltages for buck9

[ Upstream commit 3b672623079bb3e5685b8549e514f2dfaa564406 ]

The buck9 regulator of the S2MPS11 PMIC had an incorrect vsel_mask (0xff
instead of 0x1f), thus the entire register was read as buck9's voltage. This
effectively caused the regulator core to interpret values as higher voltages
than they were and then to set the real voltage much lower than intended.

The buck9 provides power to other regulators, including LDO13
and LDO19 which supply the MMC2 (SD card). On Odroid XU3/XU4 the lower
voltage caused SD card detection errors:
mmc1: card never left busy state
mmc1: error -110 whilst initialising SD card

During driver probe the regulator core was checking whether the initial
voltage matches the constraints. With the incorrect vsel_mask of 0xff and
a default value of 0x50, the core interpreted this as 5 V, which is outside
of the constraints (3-3.775 V). Then the regulator core adjusted the
voltage to match the constraints. With the incorrect vsel_mask this new
voltage mapped to a very low voltage in the driver.

Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Reviewed-by: Javier Martinez Canillas <javier@osg.samsung.com>
Tested-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
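To see the effect numerically, assume for illustration buck9's 3.0 V minimum
and 25 mV step, which is consistent with the 3-3.775 V constraint range of a
5-bit selector (0x1f gives 3.0 V + 31 * 0.025 V = 3.775 V):

  correct mask 0x1f:  0x50 & 0x1f = 0x10 = 16  ->  3.0 V + 16 * 0.025 V = 3.4 V
  broken mask  0xff:  0x50 & 0xff = 0x50 = 80  ->  3.0 V + 80 * 0.025 V = 5.0 V

The 5.0 V reading falls outside the 3-3.775 V constraints, which is what
triggered the bogus correction described above.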
9 years ago ASoC: rt5640: Correct the digital interface data select
Sugar Zhang [Fri, 18 Mar 2016 06:54:22 +0000 (14:54 +0800)]
ASoC: rt5640: Correct the digital interface data select

[ Upstream commit 653aa4645244042826f105aab1be3d01b3d493ca ]

This patch corrects the interface ADC/DAC control register definition
according to the datasheet.

Signed-off-by: Sugar Zhang <sugar.zhang@rock-chips.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago timers: Use proper base migration in add_timer_on()
Tejun Heo [Mon, 16 May 2016 06:33:01 +0000 (09:33 +0300)]
timers: Use proper base migration in add_timer_on()

[ Upstream commit 22b886dd1018093920c4250dee2a9a3cb7cff7b8 ]

Regardless of the previous CPU a timer was on, add_timer_on()
currently simply sets timer->flags to the new CPU.  As the caller must
be seeing the timer as idle, this is locally fine, but the timer
leaving the old base while unlocked can lead to race conditions as
follows.

Let's say timer was on cpu 0.

  cpu 0                                 cpu 1
  -----------------------------------------------------------------------------
  del_timer(timer) succeeds
                                        del_timer(timer)
                                          lock_timer_base(timer) locks cpu_0_base
  add_timer_on(timer, 1)
    spin_lock(&cpu_1_base->lock)
    timer->flags set to cpu_1_base
    operates on @timer                  operates on @timer

This triggered with mod_delayed_work_on() which contains
"if (del_timer()) add_timer_on()" sequence eventually leading to the
following oops.

  BUG: unable to handle kernel NULL pointer dereference at           (null)
  IP: [<ffffffff810ca6e9>] detach_if_pending+0x69/0x1a0
  ...
  Workqueue: wqthrash wqthrash_workfunc [wqthrash]
  task: ffff8800172ca680 ti: ffff8800172d0000 task.ti: ffff8800172d0000
  RIP: 0010:[<ffffffff810ca6e9>]  [<ffffffff810ca6e9>] detach_if_pending+0x69/0x1a0
  ...
  Call Trace:
   [<ffffffff810cb0b4>] del_timer+0x44/0x60
   [<ffffffff8106e836>] try_to_grab_pending+0xb6/0x160
   [<ffffffff8106e913>] mod_delayed_work_on+0x33/0x80
   [<ffffffffa0000081>] wqthrash_workfunc+0x61/0x90 [wqthrash]
   [<ffffffff8106dba8>] process_one_work+0x1e8/0x650
   [<ffffffff8106e05e>] worker_thread+0x4e/0x450
   [<ffffffff810746af>] kthread+0xef/0x110
   [<ffffffff8185980f>] ret_from_fork+0x3f/0x70

Fix it by updating add_timer_on() to perform proper migration as
__mod_timer() does.

Reported-and-tested-by: Jeff Layton <jlayton@poochiereds.net>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Chris Worley <chris.worley@primarydata.com>
Cc: bfields@fieldses.org
Cc: Michael Skralivetsky <michael.skralivetsky@primarydata.com>
Cc: Trond Myklebust <trond.myklebust@primarydata.com>
Cc: Shaohua Li <shli@fb.com>
Cc: Jeff Layton <jlayton@poochiereds.net>
Cc: kernel-team@fb.com
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20151029103113.2f893924@tlielax.poochiereds.net
Link: http://lkml.kernel.org/r/20151104171533.GI5749@mtj.duckdns.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> ( backport for 3.18 )
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
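A sketch of what "proper migration" means here, patterned on the tvec_base
timer code of this era (simplified; helper names as in the 3.18 sources,
details may differ):

  void add_timer_on(struct timer_list *timer, int cpu)
  {
          struct tvec_base *new_base = per_cpu(tvec_bases, cpu);
          struct tvec_base *base;
          unsigned long flags;

          /*
           * Lock the base the timer is currently associated with and
           * migrate it with that lock held, exactly as __mod_timer()
           * does, instead of rewriting the base while nothing is locked.
           */
          base = lock_timer_base(timer, &flags);
          if (base != new_base) {
                  timer_set_base(timer, NULL);
                  spin_unlock(&base->lock);
                  base = new_base;
                  spin_lock(&base->lock);
                  timer_set_base(timer, base);
          }
          internal_add_timer(base, timer);
          spin_unlock_irqrestore(&base->lock, flags);
  }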
9 years ago Revert "usb: hub: do not clear BOS field during reset device"
Greg Kroah-Hartman [Sat, 20 Feb 2016 22:19:34 +0000 (14:19 -0800)]
Revert "usb: hub: do not clear BOS field during reset device"

This reverts commit f9b3d78ac42bda9cd30e4c6d0149dba7067c402c.

Tony writes:

This upstream commit is causing an oops:
d8f00cd685f5 ("usb: hub: do not clear BOS field during reset device")

This patch has already been included in several -stable kernels.  Here
are the affected kernels:
4.5.0-rc4 (current git)
4.4.2
4.3.6 (currently in review)
4.1.18
3.18.27
3.14.61

How to reproduce the problem:
Boot kernel with slub debugging enabled (otherwise memory corruption
will cause random oopses later instead of immediately)
Plug in USB 3.0 disk to xhci USB 3.0 port
dd if=/dev/sdc of=/dev/null bs=65536
(where /dev/sdc is the USB 3.0 disk)
Unplug USB cable while dd is still going
Oops is immediate:

Reported-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago Linux 3.18.33 v3.18.33
Sasha Levin [Wed, 11 May 2016 04:09:19 +0000 (00:09 -0400)]
Linux 3.18.33

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago x86 EDAC, sb_edac.c: Repair damage introduced when "fixing" channel address
Tony Luck [Thu, 14 Apr 2016 17:21:52 +0000 (10:21 -0700)]
x86 EDAC, sb_edac.c: Repair damage introduced when "fixing" channel address

[ Upstream commit ff15e95c82768d589957dbb17d7eb7dba7904659 ]

In commit:

  eb1af3b71f9d ("Fix computation of channel address")

I switched the "sck_way" variable from holding the log2 value read
from the h/w to instead be the actual number. Unfortunately it
is needed in log2 form when used to shift the address.

Tested-by: Patrick Geary <patrickg@supermicro.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Acked-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Cc: Aristeu Rozanski <arozansk@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-edac@vger.kernel.org
Cc: stable@vger.kernel.org
Fixes: eb1af3b71f9d ("Fix computation of channel address")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
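In other words, a shift needs the exponent, not the decoded count;
illustratively (variable names are schematic, not the actual sb_edac code):

  /* sck_way now holds the decoded interleave count, e.g. 4 ... */
  ch_addr = addr >> ilog2(sck_way);       /* shift by log2(4) = 2 */
  /* ... whereas "addr >> sck_way" would wrongly shift by 4. */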
9 years ago x86/mm/xen: Suppress hugetlbfs in PV guests
Jan Beulich [Thu, 21 Apr 2016 06:27:04 +0000 (00:27 -0600)]
x86/mm/xen: Suppress hugetlbfs in PV guests

[ Upstream commit 103f6112f253017d7062cd74d17f4a514ed4485c ]

Huge pages are not normally available to PV guests. Not suppressing
hugetlbfs use results in an endless loop of page faults when user mode
code tries to access a hugetlbfs mapped area (since the hypervisor
denies such PTEs to be created, but error indications can't be
propagated out of xen_set_pte_at(), just like for various of its
siblings), and, once the process gets killed, in an oops like this:

  kernel BUG at .../fs/hugetlbfs/inode.c:428!
  invalid opcode: 0000 [#1] SMP
  ...
  RIP: e030:[<ffffffff811c333b>]  [<ffffffff811c333b>] remove_inode_hugepages+0x25b/0x320
  ...
  Call Trace:
   [<ffffffff811c3415>] hugetlbfs_evict_inode+0x15/0x40
   [<ffffffff81167b3d>] evict+0xbd/0x1b0
   [<ffffffff8116514a>] __dentry_kill+0x19a/0x1f0
   [<ffffffff81165b0e>] dput+0x1fe/0x220
   [<ffffffff81150535>] __fput+0x155/0x200
   [<ffffffff81079fc0>] task_work_run+0x60/0xa0
   [<ffffffff81063510>] do_exit+0x160/0x400
   [<ffffffff810637eb>] do_group_exit+0x3b/0xa0
   [<ffffffff8106e8bd>] get_signal+0x1ed/0x470
   [<ffffffff8100f854>] do_signal+0x14/0x110
   [<ffffffff810030e9>] prepare_exit_to_usermode+0xe9/0xf0
   [<ffffffff814178a5>] retint_user+0x8/0x13

This is CVE-2016-3961 / XSA-174.

Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juergen Gross <JGross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Cc: stable@vger.kernel.org
Cc: xen-devel <xen-devel@lists.xenproject.org>
Link: http://lkml.kernel.org/r/57188ED802000078000E431C@prv-mh.provo.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago s390/hugetlb: add hugepages_supported define
Dominik Dingel [Fri, 17 Jul 2015 23:23:39 +0000 (16:23 -0700)]
s390/hugetlb: add hugepages_supported define

[ Upstream commit 7f9be77555bb2e52de84e9dddf7b4eb20cc6e171 ]

On s390 we can only enable hugepages if the underlying hardware/hypervisor
also supports this.  Common code would now assume this to be
signaled by setting HPAGE_SHIFT to 0.  But on s390, where we only
support one hugepage size, there is a link between HPAGE_SHIFT and
pageblock_order.

So instead of setting HPAGE_SHIFT to 0, we will implement the check for
the hardware capability.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago mm: hugetlb: allow hugepages_supported to be architecture specific
Dominik Dingel [Fri, 17 Jul 2015 23:23:37 +0000 (16:23 -0700)]
mm: hugetlb: allow hugepages_supported to be architecture specific

[ Upstream commit 2531c8cf56a640cd7d17057df8484e570716a450 ]

s390 has a constant hugepage size, by setting HPAGE_SHIFT we also change
e.g. the pageblock_order, which should be independent in respect to
hugepage support.

With this patch every architecture is free to define how to check
for hugepage support.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
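The arch-override pattern this introduces looks roughly like the following
sketch (the exact wording in include/linux/hugetlb.h may differ):

  /* include/linux/hugetlb.h -- sketch of the generic fallback */
  #ifndef hugepages_supported
  /*
   * Architectures that need a runtime check (e.g. s390 against the
   * hardware/hypervisor facilities) define their own hugepages_supported();
   * everyone else keeps the HPAGE_SHIFT-based test.
   */
  #define hugepages_supported() (HPAGE_SHIFT != 0)
  #endif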
9 years ago drm: Loongson-3 doesn't fully support wc memory
Huacai Chen [Tue, 19 Apr 2016 11:19:11 +0000 (19:19 +0800)]
drm: Loongson-3 doesn't fully support wc memory

[ Upstream commit 221004c66a58949a0f25c937a6789c0839feb530 ]

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: stable@vger.kernel.org
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago drm/radeon: forbid mapping of userptr bo through radeon device file
Jérôme Glisse [Tue, 19 Apr 2016 13:07:50 +0000 (09:07 -0400)]
drm/radeon: forbid mapping of userptr bo through radeon device file

[ Upstream commit b5dcec693f87cb8475f2291c0075b2422addd3d6 ]

Allowing userptr BOs, which are basically a list of pages from some vma
(so either anonymous or file-backed pages), would lead to serious
corruption of kernel structures and counters (because we overwrite
the page->mapping field when mapping the buffer).

This would already be blocked if the buffer was populated before anyone
tries to mmap it, because then TTM_PAGE_FLAG_SG would be set in the
ttm_tt flags. But that flag is checked before ttm_tt_populate in the ttm
vm fault handler.

So to be safe, just add a check to the verify_access() callback.

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
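Conceptually the fix is a one-line guard in the verify_access() callback; a
sketch, assuming the radeon_ttm_tt_has_userptr() helper the radeon TTM code
provides (details of the surrounding function may differ):

  static int radeon_verify_access(struct ttm_buffer_object *bo, struct file *filp)
  {
          struct radeon_bo *rbo = container_of(bo, struct radeon_bo, tbo);

          /* userptr BOs must never be mapped through the radeon device file */
          if (radeon_ttm_tt_has_userptr(bo->ttm))
                  return -EPERM;
          return drm_vma_node_verify_access(&rbo->gem_base.vma_node, filp);
  }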
9 years ago ALSA: pcxhr: Fix missing mutex unlock
Takashi Iwai [Thu, 21 Apr 2016 15:37:54 +0000 (17:37 +0200)]
ALSA: pcxhr: Fix missing mutex unlock

[ Upstream commit 67f3754b51f22b18c4820fb84062f658c30e8644 ]

The commit [9bef72bdb26e: ALSA: pcxhr: Use nonatomic PCM ops]
converted the driver to non-atomic PCM ops, but shamelessly with unbalanced
mutex locking, which easily leads to a hangup.  Fix it.

Fixes: 9bef72bdb26e ('ALSA: pcxhr: Use nonatomic PCM ops')
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=116441
Cc: <stable@vger.kernel.org> # 3.18+
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago futex: Handle unlock_pi race gracefully
Sebastian Andrzej Siewior [Fri, 15 Apr 2016 12:35:39 +0000 (14:35 +0200)]
futex: Handle unlock_pi race gracefully

[ Upstream commit 89e9e66ba1b3bde9d8ea90566c2aee20697ad681 ]

If userspace calls UNLOCK_PI unconditionally without trying the TID -> 0
transition in user space first then the user space value might not have the
waiters bit set. This opens the following race:

CPU0                            CPU1
uval = get_user(futex)
                                lock(hb)
lock(hb)
                                futex |= FUTEX_WAITERS
                                ....
                                unlock(hb)

cmpxchg(futex, uval, newval)

So the cmpxchg fails and returns -EINVAL to user space, which is wrong because
the futex value is valid.

To handle this (yes, yet another) corner case gracefully, check for a flag
change and retry.

[ tglx: Massaged changelog and slightly reworked implementation ]

Fixes: ccf9e6a80d9e ("futex: Make unlock_pi more robust")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: stable@vger.kernel.org
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Darren Hart <dvhart@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1460723739-5195-1-git-send-email-bigeasy@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
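The retry described above boils down to: if the cmpxchg sees a value that
differs from the one read earlier only in the flag bits (e.g. FUTEX_WAITERS),
ask the caller to retry instead of failing. A heavily simplified sketch:

  if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
          ret = -EFAULT;
  else if (curval != uval) {
          /*
           * The user space value changed between get_user() and taking
           * the hash bucket lock -- typically a waiter setting
           * FUTEX_WAITERS.  If only such flag bits differ, retry rather
           * than return -EINVAL for a perfectly valid futex value.
           */
          if ((FUTEX_TID_MASK & curval) == (FUTEX_TID_MASK & uval))
                  ret = -EAGAIN;          /* caller retries the operation */
          else
                  ret = -EINVAL;
  }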
9 years ago Revert "drm/radeon: disable runtime pm on PX laptops without dGPU power control"
Alex Deucher [Mon, 18 Apr 2016 15:19:19 +0000 (11:19 -0400)]
Revert "drm/radeon: disable runtime pm on PX laptops without dGPU power control"

[ Upstream commit bfaddd9fc8ac048b99475f000dbef6f08297417f ]

This reverts commit e64c952efb8e0c15ae82cec8e455ab4910690ef1.

ATPX is the ACPI method for controlling AMD PowerXpress laptops.
There are flags to indicate which methods are supported.  If
the dGPU power down flag is not supported, the driver needs to
implement the dGPU power down manually.  We had previously
always forced the driver to assume the ATPX dGPU power down
was present, but this causes problems on boards where it is
not, leading to GPU hangs when attempting to power down the
dGPU.  Manual dGPU power down is not currently supported in
the Linux driver.  Some laptops indicate that the ATPX
dGPU power down method is not present, but it actually
apparently is.  I'm not sure if this is a bios bug and it should
be set or if there is a reason it was unset and the method should
not be used.  This is not an issue on other OSes since both the
ATPX and the manual driver power down methods are supported.

This is apparently fairly widespread, so just revert for now.

bugs:
https://bugzilla.kernel.org/show_bug.cgi?id=115321
https://bugzilla.kernel.org/show_bug.cgi?id=116581
https://bugzilla.kernel.org/show_bug.cgi?id=116251

Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago drm/radeon: add a quirk for a XFX R9 270X
Alex Deucher [Thu, 14 Apr 2016 18:15:16 +0000 (14:15 -0400)]
drm/radeon: add a quirk for a XFX R9 270X

[ Upstream commit bcb31eba4a4ea356fd61cbd5dec5511c3883f57e ]

bug:
https://bugs.freedesktop.org/show_bug.cgi?id=76490

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago drm/radeon: add another R7 370 quirk
Alex Deucher [Mon, 28 Mar 2016 14:16:40 +0000 (10:16 -0400)]
drm/radeon: add another R7 370 quirk

[ Upstream commit a64663d9870364bd2a2df62bf0d3a9fbe5ea62a8 ]

bug:
https://bugzilla.kernel.org/show_bug.cgi?id=115291

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago drm/radeon: add quirk for ASUS R7 370
Alex Deucher [Fri, 2 Oct 2015 20:12:07 +0000 (16:12 -0400)]
drm/radeon: add quirk for ASUS R7 370

[ Upstream commit 2b02ec79004388a8c65e227bc289ed891b5ac8c6 ]

Bug:
https://bugs.freedesktop.org/show_bug.cgi?id=92260

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago drm/radeon: add quirk for MSI R7 370
Maxim Sheviakov [Wed, 23 Sep 2015 21:10:51 +0000 (17:10 -0400)]
drm/radeon: add quirk for MSI R7 370

[ Upstream commit e78654799135a788a941bacad3452fbd7083e518 ]

Just adds the quirk for MSI R7 370 Armor 2X
Bug:
https://bugs.freedesktop.org/show_bug.cgi?id=91294

Signed-off-by: Maxim Sheviakov <mrader3940@yandex.ru>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago powerpc: Update cpu_user_features2 in scan_features()
Anton Blanchard [Fri, 15 Apr 2016 02:07:24 +0000 (12:07 +1000)]
powerpc: Update cpu_user_features2 in scan_features()

[ Upstream commit beff82374b259d726e2625ec6c518a5f2613f0ae ]

scan_features() updates cpu_user_features but not cpu_user_features2.

Amongst other things, cpu_user_features2 contains the user TM feature
bits which we must keep in sync with the kernel TM feature bit.

Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: stable@vger.kernel.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago powerpc: scan_features() updates incorrect bits for REAL_LE
Anton Blanchard [Fri, 15 Apr 2016 02:06:13 +0000 (12:06 +1000)]
powerpc: scan_features() updates incorrect bits for REAL_LE

[ Upstream commit 6997e57d693b07289694239e52a10d2f02c3a46f ]

The REAL_LE feature entry in the ibm_pa_feature struct is missing an MMU
feature value, meaning all the remaining elements initialise the wrong
values.

This means instead of checking for byte 5, bit 0, we check for byte 0,
bit 0, and then we incorrectly set the CPU feature bit as well as MMU
feature bit 1 and CPU user feature bits 0 and 2 (5).

Checking byte 0 bit 0 (IBM numbering), means we're looking at the
"Memory Management Unit (MMU)" feature - ie. does the CPU have an MMU.
In practice that bit is set on all platforms which have the property.

This means we set CPU_FTR_REAL_LE always. In practice that seems not to
matter because all the modern cpus which have this property also
implement REAL_LE, and we've never needed to disable it.

We're also incorrectly setting MMU feature bit 1, which is:

  #define MMU_FTR_TYPE_8xx 0x00000002

Luckily the only place that looks for MMU_FTR_TYPE_8xx is in Book3E
code, which can't run on the same cpus as scan_features(). So this also
doesn't matter in practice.

Finally in the CPU user feature mask, we're setting bits 0 and 2. Bit 2
is not currently used, and bit 0 is:

  #define PPC_FEATURE_PPC_LE 0x00000001

Which says the CPU supports the old style "PPC Little Endian" mode.
Again this should be harmless in practice as no 64-bit CPUs implement
that mode.

Fix the code by adding the missing initialisation of the MMU feature.

Also add a comment marking CPU user feature bit 2 (0x4) as reserved. It
would be unsafe to start using it as old kernels incorrectly set it.

Fixes: 44ae3ab3358e ("powerpc: Free up some CPU feature bits by moving out MMU-related features")
Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: stable@vger.kernel.org
[mpe: Flesh out changelog, add comment reserving 0x4]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago powerpc: Disable CPU_FTR_TM if TM is disabled by firmware
Aneesh Kumar K.V [Sun, 2 Nov 2014 14:32:42 +0000 (20:02 +0530)]
powerpc: Disable CPU_FTR_TM if TM is disabled by firmware

[ Upstream commit 9e819963b45f79e87f5a8c44960a66c0727c80e6 ]

Firmware is allowed to communicate to us via the "ibm,pa-features" property
that TM (Transactional Memory) support is disabled.

Currently this doesn't happen on any platform we're aware of, but we should
honor it anyway.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago crypto: ccp - Prevent information leakage on export
Tom Lendacky [Wed, 13 Apr 2016 15:52:25 +0000 (10:52 -0500)]
crypto: ccp - Prevent information leakage on export

[ Upstream commit f709b45ec461b548c41a00044dba1f1b572783bf ]

Prevent information from leaking to userspace by doing a memset to 0 of
the export state structure before setting the structure values and copying
it. This prevents un-initialized padding areas from being copied into the
export area.

Cc: <stable@vger.kernel.org> # 3.14.x-
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
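The pattern is the usual one for hash export callbacks; a minimal sketch
(struct and field names are illustrative, not necessarily the exact ccp code):

  static int ccp_sha_export(struct ahash_request *req, void *out)
  {
          struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
          struct ccp_sha_exp_ctx state;

          /* Don't let uninitialized padding bytes reach user space. */
          memset(&state, 0, sizeof(state));

          state.type = rctx->type;
          state.msg_bits = rctx->msg_bits;
          /* ... copy the remaining fields ... */

          memcpy(out, &state, sizeof(state));
          return 0;
  }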
9 years ago crypto: sha1-mb - use correct pointer while completing jobs
Xiaodong Liu [Tue, 12 Apr 2016 09:45:51 +0000 (09:45 +0000)]
crypto: sha1-mb - use correct pointer while completing jobs

[ Upstream commit 0851561d9c965df086ef8a53f981f5f95a57c2c8 ]

In sha_complete_job, an incorrect mcryptd_hash_request_ctx pointer is used
when checking and completing other jobs. If the memory of the first completed
req is freed while other jobs are still being completed in the function, the
kernel will crash since a NULL pointer is assigned to RIP.

Cc: <stable@vger.kernel.org>
Signed-off-by: Xiaodong Liu <xiaodong.liu@intel.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago nl80211: check netlink protocol in socket release notification
Dmitry Ivanov [Wed, 6 Apr 2016 14:23:18 +0000 (17:23 +0300)]
nl80211: check netlink protocol in socket release notification

[ Upstream commit 8f815cdde3e550e10c2736990d791f60c2ce43eb ]

A non-privileged user can create a netlink socket with the same port_id as
used by an existing open nl80211 netlink socket (e.g. as used by a hostapd
process) with a different protocol number.

Closing this socket will then lead to the notification going to nl80211's
socket release notification handler, and possibly cause an action such as
removing a virtual interface.

Fix this issue by checking that the netlink protocol is NETLINK_GENERIC.
Since generic netlink has no notifier chain of its own, we can't fix the
problem more generically.

Fixes: 026331c4d9b5 ("cfg80211/mac80211: allow registering for and sending action frames")
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Ivanov <dima@ubnt.com>
[rewrite commit message]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
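The check itself is small; roughly (a sketch based on the changelog, not a
verbatim copy of net/wireless/nl80211.c):

  static int nl80211_netlink_notify(struct notifier_block *nb,
                                    unsigned long state, void *_notify)
  {
          struct netlink_notify *notify = _notify;

          /* Only react to releases of generic netlink sockets. */
          if (state != NETLINK_URELEASE || notify->protocol != NETLINK_GENERIC)
                  return NOTIFY_DONE;

          /* ... tear down registrations tied to notify->portid ... */
          return NOTIFY_DONE;
  }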
9 years ago Input: gtco - fix crash on detecting device without endpoints
Vladis Dronov [Thu, 31 Mar 2016 17:53:42 +0000 (10:53 -0700)]
Input: gtco - fix crash on detecting device without endpoints

[ Upstream commit 162f98dea487206d9ab79fc12ed64700667a894d ]

The gtco driver expects at least one valid endpoint. If given malicious
descriptors that specify 0 for the number of endpoints, it will crash in
the probe function. Ensure there is at least one endpoint on the interface
before using it.

Also let's fix a minor coding style issue.

The full correct report of this issue can be found in the public
Red Hat Bugzilla:

https://bugzilla.redhat.com/show_bug.cgi?id=1283385

Reported-by: Ralf Spenneberg <ralf@spenneberg.net>
Signed-off-by: Vladis Dronov <vdronov@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
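A sketch of the kind of guard the changelog describes, placed early in the
probe callback (using the standard USB interface structures; the rest of the
function is elided):

  static int gtco_probe(struct usb_interface *usbinterface,
                        const struct usb_device_id *id)
  {
          /* The driver uses the interface's first endpoint; make sure it exists. */
          if (usbinterface->cur_altsetting->desc.bNumEndpoints < 1) {
                  dev_err(&usbinterface->dev, "Invalid number of endpoints\n");
                  return -EINVAL;
          }
          /* ... rest of probe ... */
  }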
9 years ago Linux 3.18.32 v3.18.32
Sasha Levin [Sat, 23 Apr 2016 20:46:40 +0000 (16:46 -0400)]
Linux 3.18.32

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago tcp_cubic: do not set epoch_start in the future
Eric Dumazet [Thu, 17 Sep 2015 15:38:00 +0000 (08:38 -0700)]
tcp_cubic: do not set epoch_start in the future

[ Upstream commit c2e7204d180f8efc80f27959ca9cf16fa17f67db ]

Tracking idle time in bictcp_cwnd_event() is imprecise, as epoch_start
is normally set at ACK processing time, not at send time.

Doing a proper fix would need an additional state variable,
and does not seem worth the trouble, given the CUBIC bug has been there
forever before Jana noticed it.

Let's simply not set epoch_start in the future, otherwise
bictcp_update() could overflow and CUBIC would again
grow cwnd too fast.

This was detected thanks to a packetdrill test Neal wrote that was flaky
before applying this fix.

Fixes: 30927520dbae ("tcp_cubic: better follow cubic curve after idle period")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Cc: Jana Iyengar <jri@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
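The clamp itself is tiny; a sketch of the idea inside bictcp_cwnd_event() on a
TX start event (simplified from the changelog, not a verbatim copy):

  case CA_EVENT_TX_START: {
          s32 delta = tcp_time_stamp - tp->lsndtime;      /* app-idle time */

          /* Shift the epoch by the idle period, but never into the future. */
          if (ca->epoch_start && delta > 0) {
                  ca->epoch_start += delta;
                  if (after(ca->epoch_start, tcp_time_stamp))
                          ca->epoch_start = tcp_time_stamp;
          }
          break;
  }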
9 years ago Btrfs: fix list transaction->pending_ordered corruption
Filipe Manana [Fri, 3 Jul 2015 19:30:34 +0000 (20:30 +0100)]
Btrfs: fix list transaction->pending_ordered corruption

[ Upstream commit d3efe08400317888f559bbedf0e42cd31575d0ef ]

When we call btrfs_commit_transaction(), we splice the list "ordered"
of our transaction handle into the transaction's "pending_ordered"
list, but we don't re-initialize the "ordered" list of our transaction
handle, this means it still points to the same elements it used to
before the splice. Then we check if the current transaction's state is
>= TRANS_STATE_COMMIT_START and if it is we end up calling
btrfs_end_transaction() which simply splices again the "ordered" list
of our handle into the transaction's "pending_ordered" list, leaving
multiple pointers to the same ordered extents which results in list
corruption when we are iterating, removing and freeing ordered extents
at btrfs_wait_pending_ordered(), resulting in access to dangling
pointers / use-after-free issues.
Similarly, btrfs_end_transaction() can end up in some cases calling
btrfs_commit_transaction(), and both did a list splice of the transaction
handle's "ordered" list into the transaction's "pending_ordered" without
re-initializing the handle's "ordered" list, resulting in exactly the
same problem.

This produces the following warning on a kernel with linked list
debugging enabled:

[109749.265416] ------------[ cut here ]------------
[109749.266410] WARNING: CPU: 7 PID: 324 at lib/list_debug.c:59 __list_del_entry+0x5a/0x98()
[109749.267969] list_del corruption. prev->next should be ffff8800ba087e20, but was fffffff8c1f7c35d
(...)
[109749.287505] Call Trace:
[109749.288135]  [<ffffffff8145f077>] dump_stack+0x4f/0x7b
[109749.298080]  [<ffffffff81095de5>] ? console_unlock+0x356/0x3a2
[109749.331605]  [<ffffffff8104b3b0>] warn_slowpath_common+0xa1/0xbb
[109749.334849]  [<ffffffff81260642>] ? __list_del_entry+0x5a/0x98
[109749.337093]  [<ffffffff8104b410>] warn_slowpath_fmt+0x46/0x48
[109749.337847]  [<ffffffff81260642>] __list_del_entry+0x5a/0x98
[109749.338678]  [<ffffffffa053e8bf>] btrfs_wait_pending_ordered+0x46/0xdb [btrfs]
[109749.340145]  [<ffffffffa058a65f>] ? __btrfs_run_delayed_items+0x149/0x163 [btrfs]
[109749.348313]  [<ffffffffa054077d>] btrfs_commit_transaction+0x36b/0xa10 [btrfs]
[109749.349745]  [<ffffffff81087310>] ? trace_hardirqs_on+0xd/0xf
[109749.350819]  [<ffffffffa055370d>] btrfs_sync_file+0x36f/0x3fc [btrfs]
[109749.351976]  [<ffffffff8118ec98>] vfs_fsync_range+0x8f/0x9e
[109749.360341]  [<ffffffff8118ecc3>] vfs_fsync+0x1c/0x1e
[109749.368828]  [<ffffffff8118ee1d>] do_fsync+0x34/0x4e
[109749.369790]  [<ffffffff8118f045>] SyS_fsync+0x10/0x14
[109749.370925]  [<ffffffff81465197>] system_call_fastpath+0x12/0x6f
[109749.382274] ---[ end trace 48e0d07f7c03d95a ]---

On a non-debug kernel this leads to invalid memory accesses, causing a
crash. Fix this by using list_splice_init() instead of list_splice() in
btrfs_commit_transaction() and btrfs_end_transaction().

Cc: stable@vger.kernel.org
Fixes: 50d9aa99bd35 ("Btrfs: make sure logged extents complete in the current transaction V3")
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
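The difference between the two list helpers is the whole bug: list_splice()
leaves the source head still pointing at the moved entries, while
list_splice_init() re-initializes it. A minimal sketch of the pattern the fix
switches to:

  /*
   * Move the handle's ordered extents onto the transaction AND
   * re-initialize trans->ordered, so that a later splice (e.g. from
   * btrfs_end_transaction()) cannot queue the same entries twice.
   */
  list_splice_init(&trans->ordered, &cur_trans->pending_ordered);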
9 years ago Correct backport of fa3c776 ("Thermal: Ignore invalid trip points")
Mike Galbraith [Sat, 23 Apr 2016 00:38:23 +0000 (20:38 -0400)]
Correct backport of fa3c776 ("Thermal: Ignore invalid trip points")

Backport of 81ad4276b505e987dd8ebbdf63605f92cd172b52 failed to adjust
for an intervening ->get_trip_temp() argument type change, thus causing
the stack protector to panic.

drivers/thermal/thermal_core.c: In function ‘thermal_zone_device_register’:
drivers/thermal/thermal_core.c:1569:41: warning: passing argument 3 of
‘tz->ops->get_trip_temp’ from incompatible pointer type [-Wincompatible-pointer-types]
   if (tz->ops->get_trip_temp(tz, count, &trip_temp))
                                         ^
drivers/thermal/thermal_core.c:1569:41: note: expected ‘long unsigned int *’
but argument is of type ‘int *’

CC: <stable@vger.kernel.org> #3.18,#4.1
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
9 years ago usb: musb: cppi41: correct the macro name EP_MODE_AUTOREG_*
Bin Liu [Mon, 26 Jan 2015 22:22:06 +0000 (16:22 -0600)]
usb: musb: cppi41: correct the macro name EP_MODE_AUTOREG_*

[ Upstream commit 0149b07a9e28b0d8bd2fc1c238ffe7d530c2673f ]

The macros EP_MODE_AUTOREG_* should be called EP_MODE_AUTOREQ_*, as they
are used for the AUTOREQ register.

Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago tcp_cubic: better follow cubic curve after idle period
Eric Dumazet [Thu, 10 Sep 2015 04:55:07 +0000 (21:55 -0700)]
tcp_cubic: better follow cubic curve after idle period

[ Upstream commit 30927520dbae297182990bb21d08762bcc35ce1d ]

Jana Iyengar found an interesting issue on CUBIC :

The epoch is only updated/reset initially and when experiencing losses.
The delta "t" of now - epoch_start can be arbitrarily large after app idle,
as can the bic_target. Consequently the slope (inverse of
ca->cnt) would be really large, and eventually ca->cnt would be
lower-bounded in the end to 2 to have delayed-ACK slow-start behavior.

This particularly shows up when slow_start_after_idle is disabled
as a dangerous cwnd inflation (1.5 x RTT) after few seconds of idle
time.

Jana initial fix was to reset epoch_start if app limited,
but Neal pointed out it would ask the CUBIC algorithm to recalculate the
curve so that we again start growing steeply upward from where cwnd is
now (as CUBIC does just after a loss). Ideally we'd want the cwnd growth
curve to be the same shape, just shifted later in time by the amount of
the idle period.

Reported-by: Jana Iyengar <jri@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Sangtae Ha <sangtae.ha@gmail.com>
Cc: Lawrence Brakmo <lawrence@brakmo.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago usb: hcd: out of bounds access in for_each_companion
Robert Dobrowolski [Thu, 24 Mar 2016 10:30:07 +0000 (03:30 -0700)]
usb: hcd: out of bounds access in for_each_companion

[ Upstream commit e86103a75705c7c530768f4ffaba74cf382910f2 ]

On the BXT platform the Host Controller and the Device Controller appear as
the same PCI device but with different device functions. The HCD should
not pass data to the Device Controller but only to Host Controllers.
Check whether the companion device is a Host Controller, otherwise skip it.

Cc: <stable@vger.kernel.org>
Signed-off-by: Robert Dobrowolski <robert.dobrowolski@linux.intel.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago USB: uas: Add a new NO_REPORT_LUNS quirk
Hans de Goede [Tue, 12 Apr 2016 10:27:09 +0000 (12:27 +0200)]
USB: uas: Add a new NO_REPORT_LUNS quirk

[ Upstream commit 1363074667a6b7d0507527742ccd7bbed5e3ceaa ]

Add a new NO_REPORT_LUNS quirk and set it for Seagate drives with
a USB ID of 0bc2:331a, as these will fail to respond to a
REPORT_LUNS command.

Cc: stable@vger.kernel.org
Reported-and-tested-by: David Webb <djw@noc.ac.uk>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago xhci: fix 10 second timeout on removal of PCI hotpluggable xhci controllers
Mathias Nyman [Fri, 8 Apr 2016 13:25:10 +0000 (16:25 +0300)]
xhci: fix 10 second timeout on removal of PCI hotpluggable xhci controllers

[ Upstream commit 98d74f9ceaefc2b6c4a6440050163a83be0abede ]

PCI hotpluggable xhci controllers such as some Alpine Ridge solutions will
remove the xhci controller from the PCI bus when the last USB device is
disconnected.

Add a flag to indicate that the host is being removed to avoid queueing
configure_endpoint commands for the dropped endpoints.
For PCI hotplugged controllers this will prevent 5 second command timeouts.
For static xhci controllers the configure_endpoint command is not needed
in the removal case as everything will be returned, freed, and the
controller is reset.

For now the flag is only set for PCI connected host controllers.

Cc: <stable@vger.kernel.org>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago usb: xhci: fix xhci locking up during hcd remove
Roger Quadros [Fri, 29 May 2015 14:01:49 +0000 (17:01 +0300)]
usb: xhci: fix xhci locking up during hcd remove

[ Upstream commit ad6b1d914a9e07f3b9a9ae3396f3c840d0070539 ]

The problem seems to be that if a new device is detected
while we have already removed the shared HCD, then many of the
xhci operations (e.g. xhci_alloc_dev(), xhci_setup_device())
hang as the command never completes.

I don't think XHCI can operate without the shared HCD as we've
already called xhci_halt() in xhci_only_stop_hcd() when the shared HCD
goes away. We need to prevent new commands from being queued
not only when the HCD is dying but also when it is halted.

The following lockup was detected while testing the otg state
machine.

[  178.199951] xhci-hcd xhci-hcd.0.auto: xHCI Host Controller
[  178.205799] xhci-hcd xhci-hcd.0.auto: new USB bus registered, assigned bus number 1
[  178.214458] xhci-hcd xhci-hcd.0.auto: hcc params 0x0220f04c hci version 0x100 quirks 0x00010010
[  178.223619] xhci-hcd xhci-hcd.0.auto: irq 400, io mem 0x48890000
[  178.230677] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
[  178.237796] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[  178.245358] usb usb1: Product: xHCI Host Controller
[  178.250483] usb usb1: Manufacturer: Linux 4.0.0-rc1-00024-g6111320 xhci-hcd
[  178.257783] usb usb1: SerialNumber: xhci-hcd.0.auto
[  178.267014] hub 1-0:1.0: USB hub found
[  178.272108] hub 1-0:1.0: 1 port detected
[  178.278371] xhci-hcd xhci-hcd.0.auto: xHCI Host Controller
[  178.284171] xhci-hcd xhci-hcd.0.auto: new USB bus registered, assigned bus number 2
[  178.294038] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003
[  178.301183] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[  178.308776] usb usb2: Product: xHCI Host Controller
[  178.313902] usb usb2: Manufacturer: Linux 4.0.0-rc1-00024-g6111320 xhci-hcd
[  178.321222] usb usb2: SerialNumber: xhci-hcd.0.auto
[  178.329061] hub 2-0:1.0: USB hub found
[  178.333126] hub 2-0:1.0: 1 port detected
[  178.567585] dwc3 48890000.usb: usb_otg_start_host 0
[  178.572707] xhci-hcd xhci-hcd.0.auto: remove, state 4
[  178.578064] usb usb2: USB disconnect, device number 1
[  178.586565] xhci-hcd xhci-hcd.0.auto: USB bus 2 deregistered
[  178.592585] xhci-hcd xhci-hcd.0.auto: remove, state 1
[  178.597924] usb usb1: USB disconnect, device number 1
[  178.603248] usb 1-1: new high-speed USB device number 2 using xhci-hcd
[  190.597337] INFO: task kworker/u4:0:6 blocked for more than 10 seconds.
[  190.604273]       Not tainted 4.0.0-rc1-00024-g6111320 #1058
[  190.610228] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  190.618443] kworker/u4:0    D c05c0ac0     0     6      2 0x00000000
[  190.625120] Workqueue: usb_otg usb_otg_work
[  190.629533] [<c05c0ac0>] (__schedule) from [<c05c10ac>] (schedule+0x34/0x98)
[  190.636915] [<c05c10ac>] (schedule) from [<c05c1318>] (schedule_preempt_disabled+0xc/0x10)
[  190.645591] [<c05c1318>] (schedule_preempt_disabled) from [<c05c23d0>] (mutex_lock_nested+0x1ac/0x3fc)
[  190.655353] [<c05c23d0>] (mutex_lock_nested) from [<c046cf8c>] (usb_disconnect+0x3c/0x208)
[  190.664043] [<c046cf8c>] (usb_disconnect) from [<c0470cf0>] (_usb_remove_hcd+0x98/0x1d8)
[  190.672535] [<c0470cf0>] (_usb_remove_hcd) from [<c0485da8>] (usb_otg_start_host+0x50/0xf4)
[  190.681299] [<c0485da8>] (usb_otg_start_host) from [<c04849a4>] (otg_set_protocol+0x5c/0xd0)
[  190.690153] [<c04849a4>] (otg_set_protocol) from [<c0484b88>] (otg_set_state+0x170/0xbfc)
[  190.698735] [<c0484b88>] (otg_set_state) from [<c0485740>] (otg_statemachine+0x12c/0x470)
[  190.707326] [<c0485740>] (otg_statemachine) from [<c0053c84>] (process_one_work+0x1b4/0x4a0)
[  190.716162] [<c0053c84>] (process_one_work) from [<c00540f8>] (worker_thread+0x154/0x44c)
[  190.724742] [<c00540f8>] (worker_thread) from [<c0058f88>] (kthread+0xd4/0xf0)
[  190.732328] [<c0058f88>] (kthread) from [<c000e810>] (ret_from_fork+0x14/0x24)
[  190.739898] 5 locks held by kworker/u4:0/6:
[  190.744274]  #0:  ("%s""usb_otg"){.+.+.+}, at: [<c0053bf4>] process_one_work+0x124/0x4a0
[  190.752799]  #1:  ((&otgd->work)){+.+.+.}, at: [<c0053bf4>] process_one_work+0x124/0x4a0
[  190.761326]  #2:  (&otgd->fsm.lock){+.+.+.}, at: [<c048562c>] otg_statemachine+0x18/0x470
[  190.769934]  #3:  (usb_bus_list_lock){+.+.+.}, at: [<c0470ce8>] _usb_remove_hcd+0x90/0x1d8
[  190.778635]  #4:  (&dev->mutex){......}, at: [<c046cf8c>] usb_disconnect+0x3c/0x208
[  190.786700] INFO: task kworker/1:0:14 blocked for more than 10 seconds.
[  190.793633]       Not tainted 4.0.0-rc1-00024-g6111320 #1058
[  190.799567] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  190.807783] kworker/1:0     D c05c0ac0     0    14      2 0x00000000
[  190.814457] Workqueue: usb_hub_wq hub_event
[  190.818866] [<c05c0ac0>] (__schedule) from [<c05c10ac>] (schedule+0x34/0x98)
[  190.826252] [<c05c10ac>] (schedule) from [<c05c4e40>] (schedule_timeout+0x13c/0x1ec)
[  190.834377] [<c05c4e40>] (schedule_timeout) from [<c05c19f0>] (wait_for_common+0xbc/0x150)
[  190.843062] [<c05c19f0>] (wait_for_common) from [<bf068a3c>] (xhci_setup_device+0x164/0x5cc [xhci_hcd])
[  190.852986] [<bf068a3c>] (xhci_setup_device [xhci_hcd]) from [<c046b7f4>] (hub_port_init+0x3f4/0xb10)
[  190.862667] [<c046b7f4>] (hub_port_init) from [<c046eb64>] (hub_event+0x704/0x1018)
[  190.870704] [<c046eb64>] (hub_event) from [<c0053c84>] (process_one_work+0x1b4/0x4a0)
[  190.878919] [<c0053c84>] (process_one_work) from [<c00540f8>] (worker_thread+0x154/0x44c)
[  190.887503] [<c00540f8>] (worker_thread) from [<c0058f88>] (kthread+0xd4/0xf0)
[  190.895076] [<c0058f88>] (kthread) from [<c000e810>] (ret_from_fork+0x14/0x24)
[  190.902650] 5 locks held by kworker/1:0/14:
[  190.907023]  #0:  ("usb_hub_wq"){.+.+.+}, at: [<c0053bf4>] process_one_work+0x124/0x4a0
[  190.915454]  #1:  ((&hub->events)){+.+.+.}, at: [<c0053bf4>] process_one_work+0x124/0x4a0
[  190.924070]  #2:  (&dev->mutex){......}, at: [<c046e490>] hub_event+0x30/0x1018
[  190.931768]  #3:  (&port_dev->status_lock){+.+.+.}, at: [<c046eb50>] hub_event+0x6f0/0x1018
[  190.940558]  #4:  (&bus->usb_address0_mutex){+.+.+.}, at: [<c046b458>] hub_port_init+0x58/0xb10

Signed-off-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago usb: xhci: fix wild pointers in xhci_mem_cleanup
Lu Baolu [Fri, 8 Apr 2016 13:25:09 +0000 (16:25 +0300)]
usb: xhci: fix wild pointers in xhci_mem_cleanup

[ Upstream commit 71504062a7c34838c3fccd92c447f399d3cb5797 ]

This patch fixes some wild pointers produced by xhci_mem_cleanup().
These wild pointers will cause a system crash if xhci_mem_cleanup()
is called twice.

Reported-and-tested-by: Pengcheng Li <lpc.li@hisilicon.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago usb: host: xhci: add a new quirk XHCI_NO_64BIT_SUPPORT
Yoshihiro Shimoda [Fri, 8 Apr 2016 13:25:07 +0000 (16:25 +0300)]
usb: host: xhci: add a new quirk XHCI_NO_64BIT_SUPPORT

[ Upstream commit 0a380be8233dbf8dd20795b801c5d5d5ef3992f7 ]

On some xHCI controllers (e.g. R-Car SoCs), the AC64 bit (bit 0) of
HCCPARAMS1 is set to 1. However, the xHCs don't actually support 64-bit
memory address pointers. So, in this case, this driver should
call dma_set_coherent_mask(dev, DMA_BIT_MASK(32)) in xhci_gen_setup().
Otherwise, the xHCI controller will die after a USB device is
connected if it runs in an environment with physical memory above 4GB.

So, this patch adds a new quirk XHCI_NO_64BIT_SUPPORT to resolve
such an issue.

Cc: <stable@vger.kernel.org>
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago xhci: resume USB 3 roothub first
Mathias Nyman [Fri, 8 Apr 2016 13:25:06 +0000 (16:25 +0300)]
xhci: resume USB 3 roothub first

[ Upstream commit 671ffdff5b13314b1fc65d62cf7604b873fb5dc4 ]

Give USB3 devices a better chance to enumerate at USB 3 speeds if
they are connected to a suspended host.
Solves an issue with NEC uPD720200 host hanging when partially
enumerating a USB3 device as USB2 after host controller runtime resume.

Cc: <stable@vger.kernel.org>
Tested-by: Mike Murdoch <main.haarp@gmail.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago usb: xhci: applying XHCI_PME_STUCK_QUIRK to Intel BXT B0 host
Rafal Redzimski [Fri, 8 Apr 2016 13:25:05 +0000 (16:25 +0300)]
usb: xhci: applying XHCI_PME_STUCK_QUIRK to Intel BXT B0 host

[ Upstream commit 0d46faca6f887a849efb07c1655b5a9f7c288b45 ]

Broxton B0 also requires XHCI_PME_STUCK_QUIRK.
Add the PCI device ID for Broxton B and apply the quirk.

Cc: <stable@vger.kernel.org>
Signed-off-by: Rafal Redzimski <rafal.f.redzimski@intel.com>
Signed-off-by: Robert Dobrowolski <robert.dobrowolski@linux.intel.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago lib: lz4: fixed zram with lz4 on big endian machines
Rui Salvaterra [Sat, 9 Apr 2016 21:05:34 +0000 (22:05 +0100)]
lib: lz4: fixed zram with lz4 on big endian machines

[ Upstream commit 3e26a691fe3fe1e02a76e5bab0c143ace4b137b4 ]

Based on Sergey's test patch [1], this fixes zram with lz4 compression
on big endian cpus.

Note that the 64-bit preprocessor test is not a cleanup, it's part of
the fix, since those identifiers are bogus (for example, __ppc64__
isn't defined anywhere else in the kernel, which means we'd fall into
the 32-bit definitions on ppc64).

Tested on ppc64 with no regression on x86_64.

[1] http://marc.info/?l=linux-kernel&m=145994470805853&w=4

Cc: stable@vger.kernel.org
Suggested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Rui Salvaterra <rsalvaterra@gmail.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago dmaengine: dw: fix master selection
Andy Shevchenko [Fri, 8 Apr 2016 13:22:17 +0000 (16:22 +0300)]
dmaengine: dw: fix master selection

[ Upstream commit 3fe6409c23e2bee4b2b1b6d671d2da8daa15271c ]

The commit 895005202987 ("dmaengine: dw: apply both HS interfaces and remove
slave_id usage") cleaned up the code to avoid usage of the deprecated slave_id
member of the generic slave configuration.

Meanwhile it broke the master selection by removing an important call to
dwc_set_masters() in ->device_alloc_chan_resources() which copied masters from
the custom slave configuration to the internal channel structure.

Everything works until now since there is no customized connection of
DesignWare DMA IP to the bus, i.e. one bus and one or more masters are in use.
The configurations where 2 masters are connected to the different masters are
not working anymore. We are expecting one user of such configuration and need
to select masters properly. Besides that it is obviously a performance
regression since only one master is in use in multi-master configuration.

Select masters in accordance with what the user asked for. Keep this patch in
a form more suitable for backporting.

We are safe to take necessary data in ->device_alloc_chan_resources() because
we don't support generic slave configuration embedded into custom one, and thus
the only way to provide such is to use the parameter to a filter function which
is called exactly before channel resource allocation.

While here, replace the BUG_ON with a less noisy dev_warn() and prevent
channel allocation in case of error.

Fixes: 895005202987 ("dmaengine: dw: apply both HS interfaces and remove slave_id usage")
Cc: stable@vger.kernel.org
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago ALSA: usb-audio: Skip volume controls triggers hangup on Dell USB Dock
Kailang Yang [Tue, 12 Apr 2016 02:55:03 +0000 (10:55 +0800)]
ALSA: usb-audio: Skip volume controls triggers hangup on Dell USB Dock

[ Upstream commit adcdd0d5a1cb779f6d455ae70882c19c527627a8 ]

This is a workaround for Dell USB dock audio.
It fixes the master volume being kept at a low level.

[Some background: the patch essentially skips the controls of a couple
 of FU volumes.  Although the firmware exposes the dB and the value
 information via the usb descriptor, changing the values (we set the
 min volume as default) screws up the device.  Although this has been
 fixed in the newer firmware, the devices are shipped with the old
 firmware, thus we need the workaround in the driver side.  -- tiwai]

Signed-off-by: Kailang Yang <kailang@realtek.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago ALSA: hda - fix front mic problem for a HP desktop
Hui Wang [Fri, 1 Apr 2016 03:00:15 +0000 (11:00 +0800)]
ALSA: hda - fix front mic problem for a HP desktop

[ Upstream commit e549d190f7b5f94e9ab36bd965028112914d010d ]

The front mic jack (pink color) can't detect any plug or unplug. After
applying this fix, both detecting function and recording function
work well.

BugLink: https://bugs.launchpad.net/bugs/1564712
Cc: stable@vger.kernel.org
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years ago kvm: x86: do not leak guest xcr0 into host interrupt handlers
David Matlack [Wed, 30 Mar 2016 19:24:47 +0000 (12:24 -0700)]
kvm: x86: do not leak guest xcr0 into host interrupt handlers

[ Upstream commit fc5b7f3bf1e1414bd4e91db6918c85ace0c873a5 ]

An interrupt handler that uses the fpu can kill a KVM VM, if it runs
under the following conditions:
 - the guest's xcr0 register is loaded on the cpu
 - the guest's fpu context is not loaded
 - the host is using eagerfpu

Note that the guest's xcr0 register and fpu context are not loaded as
part of the atomic world switch into "guest mode". They are loaded by
KVM while the cpu is still in "host mode".

Usage of the fpu in interrupt context is gated by irq_fpu_usable(). The
interrupt handler will look something like this:

if (irq_fpu_usable()) {
        kernel_fpu_begin();

        [... code that uses the fpu ...]

        kernel_fpu_end();
}

As long as the guest's fpu is not loaded and the host is using eager
fpu, irq_fpu_usable() returns true (interrupted_kernel_fpu_idle()
returns true). The interrupt handler proceeds to use the fpu with
the guest's xcr0 live.

kernel_fpu_begin() saves the current fpu context. If this uses
XSAVE[OPT], it may leave the xsave area in an undesirable state.
According to the SDM, during XSAVE bit i of XSTATE_BV is not modified
if bit i is 0 in xcr0. So it's possible that XSTATE_BV[i] == 1 and
xcr0[i] == 0 following an XSAVE.

kernel_fpu_end() restores the fpu context. Now if any bit i in
XSTATE_BV == 1 while xcr0[i] == 0, XRSTOR generates a #GP. The
fault is trapped and SIGSEGV is delivered to the current process.

Only pre-4.2 kernels appear to be vulnerable to this sequence of
events. Commit 653f52c ("kvm,x86: load guest FPU context more eagerly")
from 4.2 forces the guest's fpu to always be loaded on eagerfpu hosts.

This patch fixes the bug by keeping the host's xcr0 loaded outside
of the interrupts-disabled region where KVM switches into guest mode.
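
A hedged, heavily simplified sketch of the resulting ordering on the guest-entry
path (not the literal vcpu_enter_guest() code; the function name here is made up):

  static void enter_guest_sketch(struct kvm_vcpu *vcpu)
  {
          local_irq_disable();            /* host interrupt handlers can no longer run */
          kvm_load_guest_xcr0(vcpu);      /* guest xcr0 is live only inside this window */

          /* ... atomic world switch into guest mode, run the guest, VM exit ... */

          kvm_put_guest_xcr0(vcpu);       /* host xcr0 restored before ...               */
          local_irq_enable();             /* ... interrupt handlers may use the FPU again */
  }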

Cc: stable@vger.kernel.org
Suggested-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: David Matlack <dmatlack@google.com>
[Move load after goto cancel_injection. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoparisc: Unbreak handling exceptions from kernel modules
Helge Deller [Fri, 8 Apr 2016 16:32:52 +0000 (18:32 +0200)]
parisc: Unbreak handling exceptions from kernel modules

[ Upstream commit 2ef4dfd9d9f288943e249b78365a69e3ea3ec072 ]

Handling exceptions from modules never worked on parisc.
It was just masked by the fact that exceptions from modules
don't happen during normal use.

When a module triggers an exception in get_user() we need to load the
main kernel dp value before accessing the exception_data structure, and
afterwards restore the original dp value of the module on exit.

Noticed-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoparisc: percpu: update comments referring to __get_cpu_var
Christoph Lameter [Sat, 13 Dec 2014 00:58:47 +0000 (16:58 -0800)]
parisc: percpu: update comments referring to __get_cpu_var

[ Upstream commit 6ddb798f0248e3460c2dce76af5cb30a980efccd ]

__get_cpu_var was removed. Update comments to refer to
this_cpu_ptr() instead.

Signed-off-by: Christoph Lameter <cl@linux.com>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoparisc: Fix kernel crash with reversed copy_from_user()
Helge Deller [Fri, 8 Apr 2016 16:18:48 +0000 (18:18 +0200)]
parisc: Fix kernel crash with reversed copy_from_user()

[ Upstream commit ef72f3110d8b19f4c098a0bff7ed7d11945e70c6 ]

The kernel module testcase (lib/test_user_copy.c) exhibited a kernel
crash on parisc if the parameters for copy_from_user were reversed
("illegal reversed copy_to_user" testcase).

Fix this potential crash by checking in the fault handler whether the faulting
address is in the exception table.
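
A hedged, generic illustration of that check, using the kernel-wide
search_exception_tables() helper rather than the parisc-specific code
("faulting_addr" is a placeholder):

  const struct exception_table_entry *fix;

  fix = search_exception_tables(faulting_addr);
  if (fix) {
          /* a fixup is registered: branch to it and let it handle the fault */
  } else {
          /* no fixup for this address: report a genuine kernel fault (oops) */
  }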

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoparisc: Avoid function pointers for kernel exception routines
Helge Deller [Fri, 8 Apr 2016 16:11:33 +0000 (18:11 +0200)]
parisc: Avoid function pointers for kernel exception routines

[ Upstream commit e3893027a300927049efc1572f852201eb785142 ]

We want to avoid the kernel module loader to create function pointers
for the kernel fixup routines of get_user() and put_user(). Changing
the external reference from function type to int type fixes this.

This unbreaks exception handling for get_user() and put_user() when
called from a kernel module.
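
A hedged sketch of the declaration change being described (the symbol name is
just one of the parisc fixup entry points, used as an example):

  /* before: the module loader may emit a function pointer/stub for this
   * external reference, which breaks the recorded fixup address */
  extern void fixup_get_user_skip_1(void);

  /* after: referencing it as plain data avoids the function pointer while
   * still giving the exception tables the address they need */
  extern int fixup_get_user_skip_1;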

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agogpio: pca953x: Use correct u16 value for register word write
Yong Li [Wed, 30 Mar 2016 06:49:14 +0000 (14:49 +0800)]
gpio: pca953x: Use correct u16 value for register word write

[ Upstream commit 9b8e3ec34318663affced3c14d960e78d760dd9a ]

The current implementation only uses the first byte in val; the second
byte is always 0. Change it to use cpu_to_le16 to write the two bytes
into the register.
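
A hedged sketch of the described change (the function is illustrative, not the
exact driver code):

  static int pca953x_write_reg_word(struct i2c_client *client, int reg, u16 val)
  {
          __le16 word = cpu_to_le16(val);

          /* send both bytes (low byte first) instead of only the first one */
          return i2c_smbus_write_word_data(client, reg << 1, (__force u16)word);
  }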

Cc: stable@vger.kernel.org
Signed-off-by: Yong Li <sdliyong@gmail.com>
Reviewed-by: Phil Reid <preid@electromag.com.au>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoUSB: option: add "D-Link DWM-221 B1" device id
Bjørn Mork [Thu, 7 Apr 2016 10:09:17 +0000 (12:09 +0200)]
USB: option: add "D-Link DWM-221 B1" device id

[ Upstream commit d48d5691ebf88a15d95ba96486917ffc79256536 ]

Thomas reports:
"Windows:

00 diagnostics
01 modem
02 at-port
03 nmea
04 nic

Linux:

T:  Bus=02 Lev=01 Prnt=01 Port=03 Cnt=01 Dev#=  4 Spd=480 MxCh= 0
D:  Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs=  1
P:  Vendor=2001 ProdID=7e19 Rev=02.32
S:  Manufacturer=Mobile Connect
S:  Product=Mobile Connect
S:  SerialNumber=0123456789ABCDEF
C:  #Ifs= 6 Cfg#= 1 Atr=a0 MxPwr=500mA
I:  If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
I:  If#= 1 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I:  If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I:  If#= 3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I:  If#= 4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=qmi_wwan
I:  If#= 5 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage"

Reported-by: Thomas Schäfer <tschaefer@t-online.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoUSB: serial: cp210x: Adding GE Healthcare Device ID
Martyn Welch [Tue, 29 Mar 2016 16:47:29 +0000 (17:47 +0100)]
USB: serial: cp210x: Adding GE Healthcare Device ID

[ Upstream commit cddc9434e3dcc37a85c4412fb8e277d3a582e456 ]

The CP2105 is used in the GE Healthcare Remote Alarm Box, with the
Manufacturer ID of 0x1901 and Product ID of 0x0194.
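
For illustration, a hedged sketch of how such a VID:PID pair is typically added
to the driver's id table (placement and comment text are illustrative):

  static const struct usb_device_id id_table[] = {
          /* ... existing entries ... */
          { USB_DEVICE(0x1901, 0x0194) }, /* GE Healthcare Remote Alarm Box */
          /* ... */
          { }                             /* terminating entry */
  };
  MODULE_DEVICE_TABLE(usb, id_table);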

Signed-off-by: Martyn Welch <martyn.welch@collabora.co.uk>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoUSB: serial: ftdi_sio: Add support for ICP DAS I-756xU devices
Josh Boyer [Thu, 10 Mar 2016 14:48:52 +0000 (09:48 -0500)]
USB: serial: ftdi_sio: Add support for ICP DAS I-756xU devices

[ Upstream commit ea6db90e750328068837bed34cb1302b7a177339 ]

A Fedora user reports that the ftdi_sio driver works properly for the
ICP DAS I-7561U device.  Further, the user manual for these devices
instructs users to load the driver and add the ids using the sysfs
interface.

Add support for these in the driver directly so that the devices work
out of the box instead of needing manual configuration.

Reported-by: <thesource@mail.ru>
CC: stable <stable@vger.kernel.org>
Signed-off-by: Josh Boyer <jwboyer@fedoraproject.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoBtrfs: fix file/data loss caused by fsync after rename and new inode
Filipe Manana [Wed, 30 Mar 2016 22:37:21 +0000 (23:37 +0100)]
Btrfs: fix file/data loss caused by fsync after rename and new inode

[ Upstream commit 56f23fdbb600e6087db7b009775b95ce07cc3195 ]

If we rename an inode A (be it a file or a directory), create a new
inode B with the old name of inode A and under the same parent directory,
fsync inode B and then power fail, at log tree replay time we end up
removing inode A completely. If inode A is a directory then all its files
are gone too.

Example scenarios where this happens:
This is reproducible with the following steps, taken from a couple of
test cases written for fstests which are going to be submitted upstream
soon:

   # Scenario 1

   mkfs.btrfs -f /dev/sdc
   mount /dev/sdc /mnt
   mkdir -p /mnt/a/x
   echo "hello" > /mnt/a/x/foo
   echo "world" > /mnt/a/x/bar
   sync
   mv /mnt/a/x /mnt/a/y
   mkdir /mnt/a/x
   xfs_io -c fsync /mnt/a/x
   <power failure happens>

   The next time the fs is mounted, log tree replay happens and
   the directory "y" does not exist nor do the files "foo" and
   "bar" exist anywhere (neither in "y" nor in "x", nor the root
   nor anywhere).

   # Scenario 2

   mkfs.btrfs -f /dev/sdc
   mount /dev/sdc /mnt
   mkdir /mnt/a
   echo "hello" > /mnt/a/foo
   sync
   mv /mnt/a/foo /mnt/a/bar
   echo "world" > /mnt/a/foo
   xfs_io -c fsync /mnt/a/foo
   <power failure happens>

   The next time the fs is mounted, log tree replay happens and the
   file "bar" does not exist anymore. A file with the name "foo"
   exists and it matches the second file we created.

Another related problem that does not involve file/data loss is when a
new inode is created with the name of a deleted snapshot and we fsync it:

   mkfs.btrfs -f /dev/sdc
   mount /dev/sdc /mnt
   mkdir /mnt/testdir
   btrfs subvolume snapshot /mnt /mnt/testdir/snap
   btrfs subvolume delete /mnt/testdir/snap
   rmdir /mnt/testdir
   mkdir /mnt/testdir
   xfs_io -c fsync /mnt/testdir # or fsync some file inside /mnt/testdir
   <power failure>

   The next time the fs is mounted the log replay procedure fails because
   it attempts to delete the snapshot entry (which has dir item key type
   of BTRFS_ROOT_ITEM_KEY) as if it were a regular (non-root) entry,
   resulting in the following error that causes mount to fail:

   [52174.510532] BTRFS info (device dm-0): failed to delete reference to snap, inode 257 parent 257
   [52174.512570] ------------[ cut here ]------------
   [52174.513278] WARNING: CPU: 12 PID: 28024 at fs/btrfs/inode.c:3986 __btrfs_unlink_inode+0x178/0x351 [btrfs]()
   [52174.514681] BTRFS: Transaction aborted (error -2)
   [52174.515630] Modules linked in: btrfs dm_flakey dm_mod overlay crc32c_generic ppdev xor raid6_pq acpi_cpufreq parport_pc tpm_tis sg parport tpm evdev i2c_piix4 proc
   [52174.521568] CPU: 12 PID: 28024 Comm: mount Tainted: G        W       4.5.0-rc6-btrfs-next-27+ #1
   [52174.522805] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS by qemu-project.org 04/01/2014
   [52174.524053]  0000000000000000 ffff8801df2a7710 ffffffff81264e93 ffff8801df2a7758
   [52174.524053]  0000000000000009 ffff8801df2a7748 ffffffff81051618 ffffffffa03591cd
   [52174.524053]  00000000fffffffe ffff88015e6e5000 ffff88016dbc3c88 ffff88016dbc3c88
   [52174.524053] Call Trace:
   [52174.524053]  [<ffffffff81264e93>] dump_stack+0x67/0x90
   [52174.524053]  [<ffffffff81051618>] warn_slowpath_common+0x99/0xb2
   [52174.524053]  [<ffffffffa03591cd>] ? __btrfs_unlink_inode+0x178/0x351 [btrfs]
   [52174.524053]  [<ffffffff81051679>] warn_slowpath_fmt+0x48/0x50
   [52174.524053]  [<ffffffffa03591cd>] __btrfs_unlink_inode+0x178/0x351 [btrfs]
   [52174.524053]  [<ffffffff8118f5e9>] ? iput+0xb0/0x284
   [52174.524053]  [<ffffffffa0359fe8>] btrfs_unlink_inode+0x1c/0x3d [btrfs]
   [52174.524053]  [<ffffffffa038631e>] check_item_in_log+0x1fe/0x29b [btrfs]
   [52174.524053]  [<ffffffffa0386522>] replay_dir_deletes+0x167/0x1cf [btrfs]
   [52174.524053]  [<ffffffffa038739e>] fixup_inode_link_count+0x289/0x2aa [btrfs]
   [52174.524053]  [<ffffffffa038748a>] fixup_inode_link_counts+0xcb/0x105 [btrfs]
   [52174.524053]  [<ffffffffa038a5ec>] btrfs_recover_log_trees+0x258/0x32c [btrfs]
   [52174.524053]  [<ffffffffa03885b2>] ? replay_one_extent+0x511/0x511 [btrfs]
   [52174.524053]  [<ffffffffa034f288>] open_ctree+0x1dd4/0x21b9 [btrfs]
   [52174.524053]  [<ffffffffa032b753>] btrfs_mount+0x97e/0xaed [btrfs]
   [52174.524053]  [<ffffffff8108e1b7>] ? trace_hardirqs_on+0xd/0xf
   [52174.524053]  [<ffffffff8117bafa>] mount_fs+0x67/0x131
   [52174.524053]  [<ffffffff81193003>] vfs_kern_mount+0x6c/0xde
   [52174.524053]  [<ffffffffa032af81>] btrfs_mount+0x1ac/0xaed [btrfs]
   [52174.524053]  [<ffffffff8108e1b7>] ? trace_hardirqs_on+0xd/0xf
   [52174.524053]  [<ffffffff8108c262>] ? lockdep_init_map+0xb9/0x1b3
   [52174.524053]  [<ffffffff8117bafa>] mount_fs+0x67/0x131
   [52174.524053]  [<ffffffff81193003>] vfs_kern_mount+0x6c/0xde
   [52174.524053]  [<ffffffff8119590f>] do_mount+0x8a6/0x9e8
   [52174.524053]  [<ffffffff811358dd>] ? strndup_user+0x3f/0x59
   [52174.524053]  [<ffffffff81195c65>] SyS_mount+0x77/0x9f
   [52174.524053]  [<ffffffff814935d7>] entry_SYSCALL_64_fastpath+0x12/0x6b
   [52174.561288] ---[ end trace 6b53049efb1a3ea6 ]---

Fix this by forcing a transaction commit when such cases happen.
This means we check in the commit root of the subvolume tree if there
was any other inode with the same reference when the inode we are
fsync'ing is a new inode (created in the current transaction).

Test cases for fstests, covering all the scenarios given above, were
submitted upstream for fstests:

  * fstests: generic test for fsync after renaming directory
    https://patchwork.kernel.org/patch/8694281/

  * fstests: generic test for fsync after renaming file
    https://patchwork.kernel.org/patch/8694301/

  * fstests: add btrfs test for fsync after snapshot deletion
    https://patchwork.kernel.org/patch/8670671/

Cc: stable@vger.kernel.org
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoBtrfs: fix fsync after truncate when no_holes feature is enabled
Filipe Manana [Thu, 25 Jun 2015 03:17:46 +0000 (04:17 +0100)]
Btrfs: fix fsync after truncate when no_holes feature is enabled

[ Upstream commit a89ca6f24ffe435edad57de02eaabd37a2c6bff6 ]

When we have the no_holes feature enabled, if we truncate a file to a
smaller size, truncate it again but to a size greater than or equal to
its original size, and fsync it, the log tree will not have any information
about the hole covering the range [truncate_1_offset, new_file_size[.
This means that if the fsync log is replayed, the file will remain in the
state it had before both truncate operations.

Without the no_holes feature this does not happen, since when the inode
is logged (full sync flag is set) it will find in the fs/subvol tree a
leaf with a generation matching the current transaction id that has an
explicit extent item representing the hole.

Fix this by adding an explicit extent item representing a hole between
the last extent and the inode's i_size if we are doing a full sync.

The issue is easy to reproduce with the following test case for fstests:

  . ./common/rc
  . ./common/filter
  . ./common/dmflakey

  _need_to_be_root
  _supported_fs generic
  _supported_os Linux
  _require_scratch
  _require_dm_flakey

  # This test was motivated by an issue found in btrfs when the btrfs
  # no-holes feature is enabled (introduced in kernel 3.14). So enable
  # the feature if the fs being tested is btrfs.
  if [ $FSTYP == "btrfs" ]; then
      _require_btrfs_fs_feature "no_holes"
      _require_btrfs_mkfs_feature "no-holes"
      MKFS_OPTIONS="$MKFS_OPTIONS -O no-holes"
  fi

  rm -f $seqres.full

  _scratch_mkfs >>$seqres.full 2>&1
  _init_flakey
  _mount_flakey

  # Create our test files and make sure everything is durably persisted.
  $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 64K"         \
                  -c "pwrite -S 0xbb 64K 61K"       \
                  $SCRATCH_MNT/foo | _filter_xfs_io
  $XFS_IO_PROG -f -c "pwrite -S 0xee 0 64K"         \
                  -c "pwrite -S 0xff 64K 61K"       \
                  $SCRATCH_MNT/bar | _filter_xfs_io
  sync

  # Now truncate our file foo to a smaller size (64Kb) and then truncate
  # it to the size it had before the shrinking truncate (125Kb). Then
  # fsync our file. If a power failure happens after the fsync, we expect
  # our file to have a size of 125Kb, with the first 64Kb of data having
  # the value 0xaa and the second 61Kb of data having the value 0x00.
  $XFS_IO_PROG -c "truncate 64K" \
               -c "truncate 125K" \
               -c "fsync" \
               $SCRATCH_MNT/foo

  # Do something similar to our file bar, but the first truncation sets
  # the file size to 0 and the second truncation expands the size to the
  # double of what it was initially.
  $XFS_IO_PROG -c "truncate 0" \
               -c "truncate 253K" \
               -c "fsync" \
               $SCRATCH_MNT/bar

  _load_flakey_table $FLAKEY_DROP_WRITES
  _unmount_flakey

  # Allow writes again, mount to trigger log replay and validate file
  # contents.
  _load_flakey_table $FLAKEY_ALLOW_WRITES
  _mount_flakey

  # We expect foo to have a size of 125Kb, the first 64Kb of data all
  # having the value 0xaa and the remaining 61Kb to be a hole (all bytes
  # with value 0x00).
  echo "File foo content after log replay:"
  od -t x1 $SCRATCH_MNT/foo

  # We expect bar to have a size of 253Kb and no extents (any byte read
  # from bar has the value 0x00).
  echo "File bar content after log replay:"
  od -t x1 $SCRATCH_MNT/bar

  status=0
  exit

The expected file contents in the golden output are:

  File foo content after log replay:
  0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
  *
  0200000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  *
  0372000
  File bar content after log replay:
  0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  *
  0772000

Without this fix, their contents are:

  File foo content after log replay:
  0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
  *
  0200000 bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb
  *
  0372000
  File bar content after log replay:
  0000000 ee ee ee ee ee ee ee ee ee ee ee ee ee ee ee ee
  *
  0200000 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
  *
  0372000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  *
  0772000

A test case submission for fstests follows soon.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoBtrfs: fix fsync xattr loss in the fast fsync path
Filipe Manana [Fri, 19 Jun 2015 23:44:51 +0000 (00:44 +0100)]
Btrfs: fix fsync xattr loss in the fast fsync path

[ Upstream commit 36283bf777d963fac099213297e155d071096994 ]

After commit 4f764e515361 ("Btrfs: remove deleted xattrs on fsync log
replay"), we can end up in a situation where during log replay we end up
deleting xattrs that were never deleted when their file was last fsynced.

This happens in the fast fsync path (flag BTRFS_INODE_NEEDS_FULL_SYNC is
not set in the inode) if the inode has the flag BTRFS_INODE_COPY_EVERYTHING
set, the xattr was added in a past transaction and the leaf where the
xattr is located was not updated (COWed or created) in the current
transaction. In this scenario the xattr item never ends up in the log
tree, and therefore at log replay time the replay code deletes the xattr
from the fs/subvol tree, as it thinks the xattr was deleted prior to the
last fsync.

Fix this by always logging all xattrs, which is the simplest and most
reliable way to detect deleted xattrs and replay the deletes at log replay
time.

This issue is reproducible with the following test case for fstests:

  seq=`basename $0`
  seqres=$RESULT_DIR/$seq
  echo "QA output created by $seq"

  here=`pwd`
  tmp=/tmp/$$
  status=1 # failure is the default!

  _cleanup()
  {
      _cleanup_flakey
      rm -f $tmp.*
  }
  trap "_cleanup; exit \$status" 0 1 2 3 15

  # get standard environment, filters and checks
  . ./common/rc
  . ./common/filter
  . ./common/dmflakey
  . ./common/attr

  # real QA test starts here

  # We create a lot of xattrs for a single file. Only btrfs and xfs are currently
  # able to store such a large amount of xattrs per file, other filesystems such
  # as ext3/4 and f2fs for example, fail with ENOSPC even if we attempt to add
  # less than 1000 xattrs with very small values.
  _supported_fs btrfs xfs
  _supported_os Linux
  _need_to_be_root
  _require_scratch
  _require_dm_flakey
  _require_attrs
  _require_metadata_journaling $SCRATCH_DEV

  rm -f $seqres.full

  _scratch_mkfs >> $seqres.full 2>&1
  _init_flakey
  _mount_flakey

  # Create the test file with some initial data and make sure everything is
  # durably persisted.
  $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 32k" $SCRATCH_MNT/foo | _filter_xfs_io
  sync

  # Add many small xattrs to our file.
  # We create such a large amount because it's needed to trigger the issue found
  # in btrfs - we need to have an amount that causes the fs to have at least 3
  # btree leafs with xattrs stored in them, and it must work on any leaf size
  # (maximum leaf/node size is 64Kb).
  num_xattrs=2000
  for ((i = 1; i <= $num_xattrs; i++)); do
      name="user.attr_$(printf "%04d" $i)"
      $SETFATTR_PROG -n $name -v "val_$(printf "%04d" $i)" $SCRATCH_MNT/foo
  done

  # Sync the filesystem to force a commit of the current btrfs transaction, this
  # is a necessary condition to trigger the bug on btrfs.
  sync

  # Now update our file's data and fsync the file.
  # After a successful fsync, if the fsync log/journal is replayed we expect to
  # see all the xattrs we added before with the same values (and the updated file
  # data of course). Btrfs used to delete some of these xattrs when it replayed
  # its fsync log/journal.
  $XFS_IO_PROG -c "pwrite -S 0xbb 8K 16K" \
               -c "fsync" \
               $SCRATCH_MNT/foo | _filter_xfs_io

  # Simulate a crash/power loss.
  _load_flakey_table $FLAKEY_DROP_WRITES
  _unmount_flakey

  # Allow writes again and mount. This makes the fs replay its fsync log.
  _load_flakey_table $FLAKEY_ALLOW_WRITES
  _mount_flakey

  echo "File content after crash and log replay:"
  od -t x1 $SCRATCH_MNT/foo

  echo "File xattrs after crash and log replay:"
  for ((i = 1; i <= $num_xattrs; i++)); do
      name="user.attr_$(printf "%04d" $i)"
      echo -n "$name="
      $GETFATTR_PROG --absolute-names -n $name --only-values $SCRATCH_MNT/foo
      echo
  done

  status=0
  exit

The golden output expects all xattrs to be available, and with the correct
values, after the fsync log is replayed.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoBtrfs: fix fsync data loss after append write
Filipe Manana [Wed, 17 Jun 2015 11:49:23 +0000 (12:49 +0100)]
Btrfs: fix fsync data loss after append write

[ Upstream commit e4545de5b035c7debb73d260c78377dbb69cbfb5 ]

If we do an append write to a file (which increases its inode's i_size)
that does not have the flag BTRFS_INODE_NEEDS_FULL_SYNC set in its inode,
and the previous transaction added a new hard link to the file, which sets
the flag BTRFS_INODE_COPY_EVERYTHING in the file's inode, and then fsync
the file, the inode's new i_size isn't logged. This has the consequence
that after the fsync log is replayed, the file size remains what it was
before the append write operation, which means users/applications will
not be able to read the data that was successfully fsync'ed before.

This happens because neither the inode item nor the delayed inode get
their i_size updated when the append write is made - doing so would
require starting a transaction in the buffered write path, something that
we do not do intentionally for performance reasons.

Fix this by making sure that when the flag BTRFS_INODE_COPY_EVERYTHING is
set the inode is logged with its current i_size (log the in-memory inode
into the log tree).

This issue is not a recent regression and is easy to reproduce with the
following test case for fstests:

  seq=`basename $0`
  seqres=$RESULT_DIR/$seq
  echo "QA output created by $seq"

  here=`pwd`
  tmp=/tmp/$$
  status=1 # failure is the default!

  _cleanup()
  {
          _cleanup_flakey
          rm -f $tmp.*
  }
  trap "_cleanup; exit \$status" 0 1 2 3 15

  # get standard environment, filters and checks
  . ./common/rc
  . ./common/filter
  . ./common/dmflakey

  # real QA test starts here
  _supported_fs generic
  _supported_os Linux
  _need_to_be_root
  _require_scratch
  _require_dm_flakey
  _require_metadata_journaling $SCRATCH_DEV

  _crash_and_mount()
  {
          # Simulate a crash/power loss.
          _load_flakey_table $FLAKEY_DROP_WRITES
          _unmount_flakey
          # Allow writes again and mount. This makes the fs replay its fsync log.
          _load_flakey_table $FLAKEY_ALLOW_WRITES
          _mount_flakey
  }

  rm -f $seqres.full

  _scratch_mkfs >> $seqres.full 2>&1
  _init_flakey
  _mount_flakey

  # Create the test file with some initial data and then fsync it.
  # The fsync here is only needed to trigger the issue in btrfs, as it causes the
  # flag BTRFS_INODE_NEEDS_FULL_SYNC to be removed from the btrfs inode.
  $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 32k" \
                  -c "fsync" \
                  $SCRATCH_MNT/foo | _filter_xfs_io
  sync

  # Add a hard link to our file.
  # On btrfs this sets the flag BTRFS_INODE_COPY_EVERYTHING on the btrfs inode,
  # which is a necessary condition to trigger the issue.
  ln $SCRATCH_MNT/foo $SCRATCH_MNT/bar

  # Sync the filesystem to force a commit of the current btrfs transaction, this
  # is a necessary condition to trigger the bug on btrfs.
  sync

  # Now append more data to our file, increasing its size, and fsync the file.
  # In btrfs because the inode flag BTRFS_INODE_COPY_EVERYTHING was set and the
  # write path did not update the inode item in the btree nor the delayed inode
  # item (in-memory structure) in the current transaction (created by the fsync
  # handler), the fsync did not record the inode's new i_size in the fsync
  # log/journal. This made the data unavailable after the fsync log/journal is
  # replayed.
  $XFS_IO_PROG -c "pwrite -S 0xbb 32K 32K" \
               -c "fsync" \
               $SCRATCH_MNT/foo | _filter_xfs_io

  echo "File content after fsync and before crash:"
  od -t x1 $SCRATCH_MNT/foo

  _crash_and_mount

  echo "File content after crash and log replay:"
  od -t x1 $SCRATCH_MNT/foo

  status=0
  exit

The expected file output before and after the crash/power failure expects the
appended data to be available, which is:

  0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
  *
  0100000 bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb
  *
  0200000

Cc: stable@vger.kernel.org
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoassoc_array: don't call compare_object() on a node
Jerome Marchand [Wed, 6 Apr 2016 13:06:48 +0000 (14:06 +0100)]
assoc_array: don't call compare_object() on a node

[ Upstream commit 8d4a2ec1e0b41b0cf9a0c5cd4511da7f8e4f3de2 ]

Changes since V1: fixed the description and added KASan warning.

In assoc_array_insert_into_terminal_node(), we call the
compare_object() method on all non-empty slots, even when they're
not leaves, passing a pointer to an unexpected structure to
compare_object(). Currently it causes an out-of-bounds read access
in keyring_compare_object detected by KASan (see below). The issue
is easily reproduced with the keyutils testsuite.
Only call compare_object() when the slot is a leaf.
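
A hedged sketch of that guard, based on the lib/assoc_array.c pointer helpers
(the surrounding slot-scanning loop is simplified):

  /* inside the loop over the node's slots: */
  struct assoc_array_ptr *ptr = node->slots[i];

  if (!ptr)
          continue;       /* empty slot */

  /* metadata (node/shortcut) pointers are never handed to compare_object() */
  if (assoc_array_ptr_is_leaf(ptr) &&
      ops->compare_object(assoc_array_ptr_to_leaf(ptr), index_key)) {
          /* found the matching leaf */
  }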

KASan warning:
==================================================================
BUG: KASAN: slab-out-of-bounds in keyring_compare_object+0x213/0x240 at addr ffff880060a6f838
Read of size 8 by task keyctl/1655
=============================================================================
BUG kmalloc-192 (Not tainted): kasan: bad access detected
-----------------------------------------------------------------------------

Disabling lock debugging due to kernel taint
INFO: Allocated in assoc_array_insert+0xfd0/0x3a60 age=69 cpu=1 pid=1647
___slab_alloc+0x563/0x5c0
__slab_alloc+0x51/0x90
kmem_cache_alloc_trace+0x263/0x300
assoc_array_insert+0xfd0/0x3a60
__key_link_begin+0xfc/0x270
key_create_or_update+0x459/0xaf0
SyS_add_key+0x1ba/0x350
entry_SYSCALL_64_fastpath+0x12/0x76
INFO: Slab 0xffffea0001829b80 objects=16 used=8 fp=0xffff880060a6f550 flags=0x3fff8000004080
INFO: Object 0xffff880060a6f740 @offset=5952 fp=0xffff880060a6e5d1

Bytes b4 ffff880060a6f730: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f740: d1 e5 a6 60 00 88 ff ff 0e 00 00 00 00 00 00 00  ...`............
Object ffff880060a6f750: 02 cf 8e 60 00 88 ff ff 02 c0 8e 60 00 88 ff ff  ...`.......`....
Object ffff880060a6f760: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f770: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f780: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f790: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f7a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f7b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f7c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f7d0: 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f7e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f7f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
CPU: 0 PID: 1655 Comm: keyctl Tainted: G    B           4.5.0-rc4-kasan+ #291
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
 0000000000000000 000000001b2800b4 ffff880060a179e0 ffffffff81b60491
 ffff88006c802900 ffff880060a6f740 ffff880060a17a10 ffffffff815e2969
 ffff88006c802900 ffffea0001829b80 ffff880060a6f740 ffff880060a6e650
Call Trace:
 [<ffffffff81b60491>] dump_stack+0x85/0xc4
 [<ffffffff815e2969>] print_trailer+0xf9/0x150
 [<ffffffff815e9454>] object_err+0x34/0x40
 [<ffffffff815ebe50>] kasan_report_error+0x230/0x550
 [<ffffffff819949be>] ? keyring_get_key_chunk+0x13e/0x210
 [<ffffffff815ec62d>] __asan_report_load_n_noabort+0x5d/0x70
 [<ffffffff81994cc3>] ? keyring_compare_object+0x213/0x240
 [<ffffffff81994cc3>] keyring_compare_object+0x213/0x240
 [<ffffffff81bc238c>] assoc_array_insert+0x86c/0x3a60
 [<ffffffff81bc1b20>] ? assoc_array_cancel_edit+0x70/0x70
 [<ffffffff8199797d>] ? __key_link_begin+0x20d/0x270
 [<ffffffff8199786c>] __key_link_begin+0xfc/0x270
 [<ffffffff81993389>] key_create_or_update+0x459/0xaf0
 [<ffffffff8128ce0d>] ? trace_hardirqs_on+0xd/0x10
 [<ffffffff81992f30>] ? key_type_lookup+0xc0/0xc0
 [<ffffffff8199e19d>] ? lookup_user_key+0x13d/0xcd0
 [<ffffffff81534763>] ? memdup_user+0x53/0x80
 [<ffffffff819983ea>] SyS_add_key+0x1ba/0x350
 [<ffffffff81998230>] ? key_get_type_from_user.constprop.6+0xa0/0xa0
 [<ffffffff828bcf4e>] ? retint_user+0x18/0x23
 [<ffffffff8128cc7e>] ? trace_hardirqs_on_caller+0x3fe/0x580
 [<ffffffff81004017>] ? trace_hardirqs_on_thunk+0x17/0x19
 [<ffffffff828bc432>] entry_SYSCALL_64_fastpath+0x12/0x76
Memory state around the buggy address:
 ffff880060a6f700: fc fc fc fc fc fc fc fc 00 00 00 00 00 00 00 00
 ffff880060a6f780: 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc fc
>ffff880060a6f800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
                                        ^
 ffff880060a6f880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff880060a6f900: fc fc fc fc fc fc 00 00 00 00 00 00 00 00 00 00
==================================================================

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoALSA: usb-audio: Add a quirk for Plantronics BT300
Dennis Kadioglu [Wed, 6 Apr 2016 06:39:01 +0000 (08:39 +0200)]
ALSA: usb-audio: Add a quirk for Plantronics BT300

[ Upstream commit b4203ff5464da00b7812e7b480192745b0d66bbf ]

Plantronics BT300 does not support reading the sample rate, which leads
to many lines of "cannot get freq at ep 0x1". This patch adds the USB
ID of the BT300 to quirks.c and avoids those error messages.

Signed-off-by: Dennis Kadioglu <denk@post.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agorbd: use GFP_NOIO consistently for request allocations
David Disseldorp [Tue, 5 Apr 2016 09:13:39 +0000 (11:13 +0200)]
rbd: use GFP_NOIO consistently for request allocations

[ Upstream commit 2224d879c7c0f85c14183ef82eb48bd875ceb599 ]

As of 5a60e87603c4c533492c515b7f62578189b03c9c, RBD object request
allocations are made via rbd_obj_request_create() with GFP_NOIO.
However, subsequent OSD request allocations in rbd_osd_req_create*()
use GFP_ATOMIC.

With heavy page cache usage (e.g. OSDs running on same host as krbd
client), rbd_osd_req_create() order-1 GFP_ATOMIC allocations have been
observed to fail, where direct reclaim would have allowed GFP_NOIO
allocations to succeed.

Cc: stable@vger.kernel.org # 3.18+
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Suggested-by: Neil Brown <neilb@suse.com>
Signed-off-by: David Disseldorp <ddiss@suse.de>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agocompiler-gcc: disable -ftracer for __noclone functions
Paolo Bonzini [Thu, 31 Mar 2016 07:38:51 +0000 (09:38 +0200)]
compiler-gcc: disable -ftracer for __noclone functions

[ Upstream commit 95272c29378ee7dc15f43fa2758cb28a5913a06d ]

-ftracer can duplicate asm blocks causing compilation to fail in
noclone functions.  For example, KVM declares a global variable
in an asm like

    asm("2: ... \n
         .pushsection data \n
         .global vmx_return \n
         vmx_return: .long 2b");

and -ftracer causes a double declaration.
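
A hedged sketch of the resulting definition (compiler-version guards and the
surrounding #ifdefs are omitted):

  /* functions marked __noclone also get -ftracer disabled, so gcc cannot
   * duplicate their asm blocks */
  #define __noclone       __attribute__((__noclone__, __optimize__("no-tracer")))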

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: stable@vger.kernel.org
Cc: kvm@vger.kernel.org
Reported-by: Linda Walsh <lkml@tlinx.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agocompiler-gcc: integrate the various compiler-gcc[345].h files
Joe Perches [Thu, 25 Jun 2015 22:01:02 +0000 (15:01 -0700)]
compiler-gcc: integrate the various compiler-gcc[345].h files

[ Upstream commit f320793e52aee78f0fbb8bcaf10e6614d2e67bfc ]

[ Upstream commit cb984d101b30eb7478d32df56a0023e4603cba7f ]

As gcc major version numbers are going to advance rather rapidly in the
future, there's no real value in separate files for each compiler
version.

Deduplicate some of the macros #defined in each file too.

Neaten comments using normal kernel commenting style.

Signed-off-by: Joe Perches <joe@perches.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Segher Boessenkool <segher@kernel.crashing.org>
Cc: Sasha Levin <levinsasha928@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Alan Modra <amodra@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agousb: renesas_usbhs: fix to avoid using a disabled ep in usbhsg_queue_done()
Yoshihiro Shimoda [Mon, 4 Apr 2016 11:40:20 +0000 (20:40 +0900)]
usb: renesas_usbhs: fix to avoid using a disabled ep in usbhsg_queue_done()

[ Upstream commit 4fccb0767fdbdb781a9c5b5c15ee7b219443c89d ]

This patch fixes an issue where usbhsg_queue_done() may cause a kernel
panic when the dma callback is running and usb_ep_disable() is called
by an interrupt handler. (In particular, we can reproduce this issue
using g_audio with the usb-dmac driver.)

For example of a flow:
 usbhsf_dma_complete (on tasklet)
  --> usbhsf_pkt_handler (on tasklet)
   --> usbhsg_queue_done (on tasklet)
    *** interrupt happened and usb_ep_disable() is called ***
    --> usbhsg_queue_pop (on tasklet)
     Then, oops happened.

Fixes: e73a989 ("usb: renesas_usbhs: add DMAEngine support")
Cc: <stable@vger.kernel.org> # v3.1+
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agousb: renesas_usbhs: fix spinlock suspected in a gadget complete function
Yoshihiro Shimoda [Mon, 16 Mar 2015 07:36:48 +0000 (16:36 +0900)]
usb: renesas_usbhs: fix spinlock suspected in a gadget complete function

[ Upstream commit 00f30d29b497577954b20237b405e9d22b5286c2 ]

According to gadget.h, a "complete" function will always be called
with interrupts disabled. However, the usbhsg_queue_pop() function is
sometimes called with interrupts enabled. So, this function should be
protected by usbhs_lock() to disable interrupts. Also, this driver has
to call spin_unlock() before calling usb_gadget_giveback_request() to
avoid spinlock recursion in this driver.
Otherwise, a suspected spinlock lockup can be triggered in a gadget
complete function.
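
A hedged sketch of the locking pattern being described (variable names follow
the driver; the surrounding code is omitted):

  unsigned long flags;

  usbhs_lock(priv, flags);
  /* ... pop the finished request off the queue while holding the lock ... */
  usbhs_unlock(priv, flags);

  /* the gadget complete() callback must run without the driver lock held */
  usb_gadget_giveback_request(&uep->ep, &ureq->req);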

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoxen/events: Mask a moving irq
Boris Ostrovsky [Fri, 18 Mar 2016 14:11:07 +0000 (10:11 -0400)]
xen/events: Mask a moving irq

[ Upstream commit ff1e22e7a638a0782f54f81a6c9cb139aca2da35 ]

Moving an unmasked irq may result in the irq handler being invoked on both
the source and target CPUs.

With 2-level this can happen as follows:

On source CPU:
        evtchn_2l_handle_events() ->
            generic_handle_irq() ->
                handle_edge_irq() ->
                   eoi_pirq():
                       irq_move_irq(data);

                       /***** WE ARE HERE *****/

                       if (VALID_EVTCHN(evtchn))
                           clear_evtchn(evtchn);

If at this moment target processor is handling an unrelated event in
evtchn_2l_handle_events()'s loop it may pick up our event since target's
cpu_evtchn_mask claims that this event belongs to it *and* the event is
unmasked and still pending. At the same time, source CPU will continue
executing its own handle_edge_irq().

With FIFO interrupts the scenario is similar: irq_move_irq() may result
in an EVTCHNOP_unmask hypercall which, in turn, may make the event
pending on the target CPU.

We can avoid this situation by moving and clearing the event while
keeping event masked.
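
A hedged sketch of that approach, close to (but not literally) the resulting
event-channel ack/EOI handling:

  if (VALID_EVTCHN(evtchn) && unlikely(irqd_is_setaffinity_pending(data))) {
          int masked = test_and_set_mask(evtchn);

          clear_evtchn(evtchn);           /* cleared while still masked */
          irq_move_masked_irq(data);      /* target cannot pick it up yet */

          if (!masked)
                  unmask_evtchn(evtchn);
  } else if (VALID_EVTCHN(evtchn)) {
          clear_evtchn(evtchn);
  }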

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: stable@vger.kernel.org
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoALSA: usb-audio: Add a sample rate quirk for Phoenix Audio TMX320
Takashi Iwai [Mon, 4 Apr 2016 09:47:50 +0000 (11:47 +0200)]
ALSA: usb-audio: Add a sample rate quirk for Phoenix Audio TMX320

[ Upstream commit f03b24a851d32ca85dacab01785b24a7ee717d37 ]

Phoenix Audio TMX320 gives a similar error when the sample rate is
queried:
  usb 2-1.3: 2:1: cannot get freq at ep 0x85
  usb 2-1.3: 1:1: cannot get freq at ep 0x2
  ....

Add the corresponding USB-device ID (1de7:0014) to
snd_usb_get_sample_rate_quirk() list.
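
A hedged sketch of what such an entry looks like in
snd_usb_get_sample_rate_quirk() (the rest of the function is elided):

  bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
  {
          /* devices which do not support reading the sample rate */
          switch (chip->usb_id) {
          case USB_ID(0x1de7, 0x0014): /* Phoenix Audio TMX320 */
                  return true;
          }
          return false;
  }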

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=110221
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoALSA: usb-audio: Add sample rate inquiry quirk for AudioQuest DragonFly
Anssi Hannula [Sun, 13 Dec 2015 18:49:59 +0000 (20:49 +0200)]
ALSA: usb-audio: Add sample rate inquiry quirk for AudioQuest DragonFly

[ Upstream commit 12a6116e66695a728bcb9616416c508ce9c051a1 ]

Avoid getting sample rate on AudioQuest DragonFly as it is unsupported
and causes noisy "cannot get freq at ep 0x1" messages when playback
starts.

Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoALSA: usb-audio: don't try to get Outlaw RR2150 sample rate
Eric Wong [Sat, 30 May 2015 09:15:39 +0000 (09:15 +0000)]
ALSA: usb-audio: don't try to get Outlaw RR2150 sample rate

[ Upstream commit 2f80b2958abe5658000d5ad9b45a36ecf879666e ]

This quirk allows us to avoid the noisy:

current rate 0 is different from the runtime rate

message every time playback starts.  While the USB DAC in the RR2150
supports reading the sample rate, in my observation it never returns a
sample rate other than zero with common sample rates.

Signed-off-by: Eric Wong <normalperson@yhbt.net>
Cc: Joe Turner <joe@oampo.co.uk>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoext4: ignore quota mount options if the quota feature is enabled
Theodore Ts'o [Sun, 3 Apr 2016 21:03:37 +0000 (17:03 -0400)]
ext4: ignore quota mount options if the quota feature is enabled

[ Upstream commit c325a67c72903e1cc30e990a15ce745bda0dbfde ]

Previously, ext4 would fail the mount if the file system had the quota
feature enabled and quota mount options (used for the older quota
setups) were present.  This broke xfstests, since xfs silently ignores
the usrquota and grpquota mount options if they are specified.  This
commit changes things so that we are consistent with xfs; having the
mount options specified is harmless, so there is no sense in breaking
users by forbidding them.

Cc: stable@vger.kernel.org
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoKVM: x86: Inject pending interrupt even if pending nmi exist
Yuki Shibuya [Thu, 24 Mar 2016 05:17:03 +0000 (05:17 +0000)]
KVM: x86: Inject pending interrupt even if pending nmi exist

[ Upstream commit 321c5658c5e9192dea0d58ab67cf1791e45b2b26 ]

Non-maskable interrupts (NMI) are preferred to interrupts in the current
implementation. If an NMI is pending and NMI injection is blocked by the
result of nmi_allowed(), the pending interrupt is not injected and
enable_irq_window() is not executed, even if interrupt injection is
allowed.

In old kernels (e.g. 2.6.32), schedule() is often called in NMI context.
In this case, interrupts are needed to execute the iret that marks the
end of the NMI. The flag blocking new NMIs is not cleared until the
guest executes that iret, while interrupts are blocked by the pending
NMI. Due to this, the iret can't be invoked in the guest, and the guest
is starved until the block is cleared by some event (e.g. canceling the
injection).

This patch injects pending interrupts, when allowed, even if an NMI
is blocked. And, if an interrupt is still pending after executing
inject_pending_event(), enable_irq_window() is executed regardless of
the NMI pending counter.
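
A hedged pseudocode rendering of that policy (helper names are illustrative,
not the exact KVM symbols):

  /* prefer NMIs, but only when one can actually be injected */
  if (nmi_pending && nmi_allowed())
          inject_nmi();
  else if (irq_pending && interrupt_allowed())
          inject_interrupt();     /* no longer starved by a blocked, pending NMI */

  /* open an interrupt window whenever an interrupt is still pending,
   * regardless of the NMI pending counter */
  if (irq_pending)
          enable_irq_window();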

Cc: stable@vger.kernel.org
Signed-off-by: Yuki Shibuya <shibuya.yk@ncos.nec.co.jp>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoext4: add lockdep annotations for i_data_sem
Theodore Ts'o [Fri, 1 Apr 2016 05:31:28 +0000 (01:31 -0400)]
ext4: add lockdep annotations for i_data_sem

[ Upstream commit daf647d2dd58cec59570d7698a45b98e580f2076 ]

With the internal Quota feature, mke2fs creates empty quota inodes and
quota usage tracking is enabled as soon as the file system is mounted.
Since quotacheck is no longer preallocating all of the blocks in the
quota inode that are likely needed to be written to, we are now seeing
a lockdep false positive caused by needing to allocate a quota block
from inside ext4_map_blocks(), while holding i_data_sem for a data
inode.  This results in this complaint:

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&ei->i_data_sem);
                                lock(&s->s_dquot.dqio_mutex);
                                lock(&ei->i_data_sem);
   lock(&s->s_dquot.dqio_mutex);

Google-Bug-Id: 27907753
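
A hedged sketch of the lockdep-subclass annotation pattern this implies (the
subclass names are illustrative, not necessarily the ones ext4 uses):

  enum {
          I_DATA_SEM_NORMAL = 0,  /* regular data inodes */
          I_DATA_SEM_QUOTA,       /* hidden quota inodes */
  };

  /* regular inode: default lockdep class/subclass */
  down_write(&EXT4_I(inode)->i_data_sem);

  /* quota inode: a distinct subclass tells lockdep this i_data_sem can
   * never deadlock against the one of a regular data inode */
  down_write_nested(&EXT4_I(quota_inode)->i_data_sem, I_DATA_SEM_QUOTA);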

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agosd: Fix excessive capacity printing on devices with blocks bigger than 512 bytes
Martin K. Petersen [Tue, 29 Mar 2016 01:18:56 +0000 (21:18 -0400)]
sd: Fix excessive capacity printing on devices with blocks bigger than 512 bytes

[ Upstream commit f08bb1e0dbdd0297258d0b8cd4dbfcc057e57b2a ]

During revalidate we check whether device capacity has changed before we
decide whether to output disk information or not.

The check for old capacity failed to take into account that we scaled
sdkp->capacity based on the reported logical block size. And therefore
the capacity test would always fail for devices with sectors bigger than
512 bytes and we would print several copies of the same discovery
information.

Avoid scaling sdkp->capacity and instead adjust the value on the fly
when setting the block device capacity and generating fake C/H/S
geometry.
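
A hedged sketch of the on-the-fly conversion being described (the helper name
is illustrative):

  /* sdkp->capacity stays in logical blocks; convert to 512-byte sectors
   * only where the block layer needs it */
  static sector_t logical_to_sectors(struct scsi_device *sdev, sector_t blocks)
  {
          return blocks << (ilog2(sdev->sector_size) - 9);
  }

  /* at the call site: */
  set_capacity(sdkp->disk, logical_to_sectors(sdp, sdkp->capacity));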

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Cc: <stable@vger.kernel.org>
Reported-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Hannes Reinicke <hare@suse.de>
Reviewed-by: Ewan Milne <emilne@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoUSB: digi_acceleport: do sanity checking for the number of ports
Oliver Neukum [Thu, 31 Mar 2016 16:04:26 +0000 (12:04 -0400)]
USB: digi_acceleport: do sanity checking for the number of ports

[ Upstream commit 5a07975ad0a36708c6b0a5b9fea1ff811d0b0c1f ]

The driver can be crashed with devices that expose crafted descriptors
with too few endpoints.

See: http://seclists.org/bugtraq/2016/Mar/61
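
A hedged illustration of the kind of check being added (not the exact driver
code; "required" is a placeholder for the count the driver expects):

  static int digi_check_ports(struct usb_serial *serial, int required)
  {
          /* refuse to bind if a crafted device exposes fewer ports/endpoints
           * than the driver is about to dereference */
          if (serial->num_ports < required) {
                  dev_err(&serial->interface->dev,
                          "missing ports: have %d, need %d\n",
                          serial->num_ports, required);
                  return -ENODEV;
          }
          return 0;
  }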

Signed-off-by: Oliver Neukum <ONeukum@suse.com>
[johan: fix OOB endpoint check and add error messages ]
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoUSB: cypress_m8: add endpoint sanity check
Oliver Neukum [Thu, 31 Mar 2016 16:04:25 +0000 (12:04 -0400)]
USB: cypress_m8: add endpoint sanity check

[ Upstream commit c55aee1bf0e6b6feec8b2927b43f7a09a6d5f754 ]

An attack using missing endpoints exists.

CVE-2016-3137

Signed-off-by: Oliver Neukum <ONeukum@suse.com>
CC: stable@vger.kernel.org
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoUSB: mct_u232: add sanity checking in probe
Oliver Neukum [Thu, 31 Mar 2016 16:04:24 +0000 (12:04 -0400)]
USB: mct_u232: add sanity checking in probe

[ Upstream commit 4e9a0b05257f29cf4b75f3209243ed71614d062e ]

An attack using the lack of sanity checking in probe is known. This
patch checks for the existence of a second port.

CVE-2016-3136

Signed-off-by: Oliver Neukum <ONeukum@suse.com>
CC: stable@vger.kernel.org
[johan: add error message ]
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agodrm/qxl: fix cursor position with non-zero hotspot
John Keeping [Wed, 18 Nov 2015 11:17:25 +0000 (11:17 +0000)]
drm/qxl: fix cursor position with non-zero hotspot

[ Upstream commit d59a1f71ff1aeda4b4630df92d3ad4e3b1dfc885 ]

The SPICE protocol considers the position of a cursor to be the location
of its active pixel on the display, so the cursor is drawn with its
top-left corner at "(x - hot_spot_x, y - hot_spot_y)" but the DRM cursor
position gives the location where the top-left corner should be drawn,
with the hotspot being a hint for drivers that need it.

This fixes the location of the window resize cursors when using Fluxbox
with the QXL DRM driver and both the QXL and modesetting X drivers.
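
A hedged sketch of the coordinate adjustment this implies (structure and field
names are illustrative):

  /* DRM hands us the top-left corner; SPICE wants the active pixel,
   * so add the hotspot back in before sending the cursor command */
  cursor_cmd->u.position.x = drm_cursor_x + hot_spot_x;
  cursor_cmd->u.position.y = drm_cursor_y + hot_spot_y;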

Signed-off-by: John Keeping <john@metanate.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Link: http://patchwork.freedesktop.org/patch/msgid/1447845445-2116-1-git-send-email-john@metanate.com
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agousb: renesas_usbhs: disable TX IRQ before starting TX DMAC transfer
Yoshihiro Shimoda [Thu, 10 Mar 2016 02:30:15 +0000 (11:30 +0900)]
usb: renesas_usbhs: disable TX IRQ before starting TX DMAC transfer

[ Upstream commit 6490865c67825277b29638e839850882600b48ec ]

This patch adds code to reliably disable the TX IRQ of the pipe before
starting a TX DMAC transfer. Otherwise, a lot of unnecessary TX IRQs
may happen in rare cases when the DMAC is used.

Fixes: e73a989 ("usb: renesas_usbhs: add DMAEngine support")
Cc: <stable@vger.kernel.org> # v3.1+
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agousb: renesas_usbhs: avoid NULL pointer dereference in usbhsf_pkt_handler()
Yoshihiro Shimoda [Thu, 10 Mar 2016 02:30:14 +0000 (11:30 +0900)]
usb: renesas_usbhs: avoid NULL pointer dereference in usbhsf_pkt_handler()

[ Upstream commit 894f2fc44f2f3f48c36c973b1123f6ab298be160 ]

When an unexpected situation happens (e.g. a tx/rx irq fires while the
DMAC is in use), usbhsf_pkt_handler() can cause a NULL pointer
dereference like the following:

Unable to handle kernel NULL pointer dereference at virtual address 00000000
pgd = c0004000
[00000000] *pgd=00000000
Internal error: Oops: 80000007 [#1] SMP ARM
Modules linked in: usb_f_acm u_serial g_serial libcomposite
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.5.0-rc6-00842-gac57066-dirty #63
Hardware name: Generic R8A7790 (Flattened Device Tree)
task: c0729c00 ti: c0724000 task.ti: c0724000
PC is at 0x0
LR is at usbhsf_pkt_handler+0xac/0x118
pc : [<00000000>]    lr : [<c03257e0>]    psr: 60000193
sp : c0725db8  ip : 00000000  fp : c0725df4
r10: 00000001  r9 : 00000193  r8 : ef3ccab4
r7 : ef3cca10  r6 : eea4586c  r5 : 00000000  r4 : ef19ceb4
r3 : 00000000  r2 : 0000009c  r1 : c0725dc4  r0 : ef19ceb4

This patch adds a condition to avoid the dereference.

Fixes: e73a989 ("usb: renesas_usbhs: add DMAEngine support")
Cc: <stable@vger.kernel.org> # v3.1+
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoARM: OMAP2+: hwmod: Fix updating of sysconfig register
Lokesh Vutla [Sun, 27 Mar 2016 05:08:55 +0000 (23:08 -0600)]
ARM: OMAP2+: hwmod: Fix updating of sysconfig register

[ Upstream commit 3ca4a238106dedc285193ee47f494a6584b6fd2f ]

Commit 127500ccb766f ("ARM: OMAP2+: Only write the sysconfig on idle
when necessary") talks about verifying the cached sysconfig value before
updating it, but only in the idle path. However, that patch added the
verification in the enable path. So, add the check in the proper place,
as per the commit description.

The check is not kept in the enable path because there is a chance of
losing context; it is safe to do on idle, as the register context will
never be lost while the device is active.

Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
Acked-by: Tero Kristo <t-kristo@ti.com>
Cc: Jon Hunter <jonathanh@nvidia.com>
Cc: <stable@vger.kernel.org> # 3.12+
Fixes: commit 127500ccb766 "ARM: OMAP2+: Only write the sysconfig on idle when necessary"
[paul@pwsan.com: appears to have been caused by my own mismerge of the
 originally posted patch]
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoHID: usbhid: fix inconsistent reset/resume/reset-resume behavior
Alan Stern [Wed, 23 Mar 2016 16:17:09 +0000 (12:17 -0400)]
HID: usbhid: fix inconsistent reset/resume/reset-resume behavior

[ Upstream commit 972e6a993f278b416a8ee3ec65475724fc36feb2 ]

The usbhid driver has inconsistently duplicated code in its post-reset,
resume, and reset-resume pathways.

reset-resume doesn't check HID_STARTED before trying to
restart the I/O queues.

resume fails to clear the HID_SUSPENDED flag if HID_STARTED
isn't set.

resume calls usbhid_restart_queues() with usbhid->lock held
and the others call it without holding the lock.

The first item in particular causes a problem following a reset-resume
if the driver hasn't started up its I/O.  URB submission fails because
usbhid->urbin is NULL, and this triggers an unending reset-retry loop.

This patch fixes the problem by creating a new subroutine,
hid_restart_io(), to carry out all the common activities.  It also
adds some checks that were missing in the original code:

After a reset, there's no need to clear any halted endpoints.

After a resume, if a reset is pending there's no need to
restart any I/O until the reset is finished.

After a resume, if the interrupt-IN endpoint is halted there's
no need to submit the input URB until the halt has been
cleared.

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Daniel Fraga <fragabr@gmail.com>
Tested-by: Daniel Fraga <fragabr@gmail.com>
CC: <stable@vger.kernel.org>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoLinux 3.18.31 v3.18.31
Sasha Levin [Tue, 19 Apr 2016 11:57:18 +0000 (07:57 -0400)]
Linux 3.18.31

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agocrypto: algif_skcipher - Fix race condition in skcipher_check_key
Herbert Xu [Fri, 26 Feb 2016 11:44:11 +0000 (12:44 +0100)]
crypto: algif_skcipher - Fix race condition in skcipher_check_key

    commit 1822793a523e5d5730b19cc21160ff1717421bc8 upstream.

    We need to lock the child socket in skcipher_check_key as otherwise
    two simultaneous calls can cause the parent socket to be freed.

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agocrypto: algif_skcipher - Remove custom release parent function
Herbert Xu [Fri, 26 Feb 2016 11:44:10 +0000 (12:44 +0100)]
crypto: algif_skcipher - Remove custom release parent function

    commit d7b65aee1e7b4c87922b0232eaba56a8a143a4a0 upstream.

    This patch removes the custom release parent function as the
    generic af_alg_release_parent now works for nokey sockets too.

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agocrypto: algif_skcipher - Add nokey compatibility path
Herbert Xu [Fri, 26 Feb 2016 11:44:09 +0000 (12:44 +0100)]
crypto: algif_skcipher - Add nokey compatibility path

    commit a0fa2d037129a9849918a92d91b79ed6c7bd2818 upstream.

    This patch adds a compatibility path to support old applications
    that do acept(2) before setkey.

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    [backported to 3.18 by Milan Broz <gmazyland@gmail.com>]

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agocrypto: algif_skcipher - Require setkey before accept(2)
Herbert Xu [Fri, 26 Feb 2016 11:44:08 +0000 (12:44 +0100)]
crypto: algif_skcipher - Require setkey before accept(2)

    commit dd504589577d8e8e70f51f997ad487a4cb6c026f upstream.

    Some cipher implementations will crash if you try to use them
    without calling setkey first.  This patch adds a check so that
    the accept(2) call will fail with -ENOKEY if setkey hasn't been
    done on the socket yet.

Cc: stable@vger.kernel.org
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Dmitry Vyukov <dvyukov@google.com>
    [backported to 3.18 by Milan Broz <gmazyland@gmail.com>]

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoALSA: hda - Fix regression of monitor_present flag in eld proc file
Hyungwon Hwang [Wed, 13 Apr 2016 00:27:39 +0000 (09:27 +0900)]
ALSA: hda - Fix regression of monitor_present flag in eld proc file

[ Upstream commit 023d8218ec0dfc30e11d4ec54f640e8f127d1fbe ]

The commit [bd48128539ab: ALSA: hda - Fix forgotten HDMI
monitor_present update] covered the missing update of the monitor_present
flag, but this caused a regression for devices without the i915 eld
notifier.  Since the old code assumed that pin_eld->monitor_present
was updated by the caller side, hdmi_present_sense_via_verbs()
doesn't update the temporary eld->monitor_present but only
pin_eld->monitor_present, which is now overridden in update_eld().

The fix is to update pin_eld->monitor_present as well before calling
update_eld().

Note that this may still leave the monitor_present flag in an inconsistent
state when the driver repolls, but this is at least the old behavior.
More proper fix will follow in the later patch.

Fixes: bd48128539ab ('ALSA: hda - Fix forgotten HDMI monitor_present update')
Signed-off-by: Hyungwon Hwang <hyungwon.hwang7@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoarm64: errata: Add -mpc-relative-literal-loads to build flags
dann frazier [Mon, 25 Jan 2016 23:52:16 +0000 (16:52 -0700)]
arm64: errata: Add -mpc-relative-literal-loads to build flags

[ Upstream commit 67dfa1751ce71e629aad7c438e1678ad41054677 ]

GCC6 (and Linaro's 2015.12 snapshot of GCC5) has a new default that uses
adrp/ldr or adrp/add to address literal pools. When CONFIG_ARM64_ERRATUM_843419
is enabled, modules built with this toolchain fail to load:

  module libahci: unsupported RELA relocation: 275

This patch fixes the problem by passing '-mpc-relative-literal-loads'
to the compiler.
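
A sketch of what the arch/arm64/Makefile hunk looks like; whether this
tree wraps the flag in cc-option (so older compilers simply ignore it)
is an assumption, not something the log above shows:

  ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
  KBUILD_CFLAGS_MODULE += $(call cc-option, -mpc-relative-literal-loads)
  endif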

Cc: stable@vger.kernel.org
Fixes: df057cc7b4fa ("arm64: errata: add module build workaround for erratum #843419")
BugLink: http://bugs.launchpad.net/bugs/1533009
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Christophe Lyon <christophe.lyon@linaro.org>
Signed-off-by: Dann Frazier <dann.frazier@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agomm/page_alloc: prevent merging between isolated and other pageblocks
Vlastimil Babka [Fri, 25 Mar 2016 21:21:50 +0000 (14:21 -0700)]
mm/page_alloc: prevent merging between isolated and other pageblocks

[ Upstream commit d9dddbf556674bf125ecd925b24e43a5cf2a568a ]

Hanjun Guo has reported that a CMA stress test causes broken accounting of
CMA and free pages:

> Before the test, I got:
> -bash-4.3# cat /proc/meminfo | grep Cma
> CmaTotal:         204800 kB
> CmaFree:          195044 kB
>
>
> After running the test:
> -bash-4.3# cat /proc/meminfo | grep Cma
> CmaTotal:         204800 kB
> CmaFree:         6602584 kB
>
> So the freed CMA memory is more than total..
>
> Also the MemFree is more than the mem total:
>
> -bash-4.3# cat /proc/meminfo
> MemTotal:       16342016 kB
> MemFree:        22367268 kB
> MemAvailable:   22370528 kB

Laura Abbott has confirmed the issue and suspected the freepage accounting
rewrite around 3.18/4.0 by Joonsoo Kim.  Joonsoo had a theory that this is
caused by unexpected merging between MIGRATE_ISOLATE and MIGRATE_CMA
pageblocks:

> CMA isolates MAX_ORDER aligned blocks, but, during the process,
> a partially isolated block exists. If MAX_ORDER is 11 and
> pageblock_order is 9, two pageblocks make up a MAX_ORDER
> aligned block, and I can think of the following scenario because
> pageblock (un)isolation would be done one by one.
>
> (each character means one pageblock; 'C' and 'I' mean MIGRATE_CMA and
> MIGRATE_ISOLATE, respectively.)
>
> CC -> IC -> II (Isolation)
> II -> CI -> CC (Un-isolation)
>
> If some pages are freed at an intermediate state such as IC or CI,
> that page could be merged with another page that resides on a
> different type of pageblock, and it will cause a wrong freepage count.

This was supposed to be prevented by CMA operating on MAX_ORDER blocks,
but since it doesn't hold the zone->lock between pageblocks, a race
window does exist.

It's also likely that unexpected merging can occur between
MIGRATE_ISOLATE and non-CMA pageblocks.  This should be prevented in
__free_one_page() since commit 3c605096d315 ("mm/page_alloc: restrict
max order of merging on isolated pageblock").  However, we only check
the migratetype of the pageblock where buddy merging has been initiated,
not the migratetype of the buddy pageblock (or group of pageblocks)
which can be MIGRATE_ISOLATE.

Joonsoo has suggested checking the buddy migratetype as part of
page_is_buddy(), but that would add extra checks to the allocator
hotpath, and bloat-o-meter has shown significant code bloat (the
function is inline).

This patch reduces the bloat at the expense of somewhat more
complicated code.  The buddy-merging while-loop in __free_one_page()
is initially bounded to pageblock_order and contains no migratetype
checks.  The checks are placed outside the loop, bumping max_order if
merging is allowed, and returning to the while-loop with a statement
which can't possibly be considered harmful.
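
A simplified sketch of the resulting control flow in __free_one_page(),
using that era's mm helpers (page_is_buddy(), has_isolate_pageblock(),
get_pageblock_migratetype(), is_migrate_isolate()); find_buddy() below
is a shorthand for the buddy-index arithmetic, and the real patch
carries more detail:

continue_merging:
        while (order < max_order - 1) {         /* bounded to pageblock_order first */
                buddy = find_buddy(page, order);
                if (!page_is_buddy(page, buddy, order))
                        goto done_merging;
                /* take buddy off its free list and merge */
                order++;
        }
        if (max_order < MAX_ORDER) {
                /* migratetype checks happen only here, outside the hot loop */
                if (unlikely(has_isolate_pageblock(zone))) {
                        int buddy_mt = get_pageblock_migratetype(find_buddy(page, order));

                        if (migratetype != buddy_mt &&
                            (is_migrate_isolate(migratetype) ||
                             is_migrate_isolate(buddy_mt)))
                                goto done_merging;      /* never merge across isolation */
                }
                max_order++;
                goto continue_merging;          /* the "harmless" statement */
        }
done_merging:
        /* record the final order and put the page on the free list */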

This fixes the accounting bug and also removes the arguably weird state
in the original commit 3c605096d315 where buddies could be left
unmerged.

Fixes: 3c605096d315 ("mm/page_alloc: restrict max order of merging on isolated pageblock")
Link: https://lkml.org/lkml/2016/3/2/280
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Hanjun Guo <guohanjun@huawei.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Debugged-by: Laura Abbott <labbott@redhat.com>
Debugged-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: <stable@vger.kernel.org> [3.18+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agomm: use 'unsigned int' for page order
Kirill A. Shutemov [Sat, 7 Nov 2015 00:29:57 +0000 (16:29 -0800)]
mm: use 'unsigned int' for page order

[ Upstream commit d00181b96eb86c914cb327d1de974a1b71366e1b ]

Let's try to be consistent about the data type of page order.

[sfr@canb.auug.org.au: fix build (type of pageblock_order)]
[hughd@google.com: some configs end up with MAX_ORDER and pageblock_order having different types]
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agomm: page_alloc: pass PFN to __free_pages_bootmem
Mel Gorman [Tue, 30 Jun 2015 21:56:52 +0000 (14:56 -0700)]
mm: page_alloc: pass PFN to __free_pages_bootmem

[ Upstream commit d70ddd7a5d9aa335f9b4b0c3d879e1e70ee1e4e3 ]

__free_pages_bootmem prepares a page for release to the buddy allocator
and assumes that the struct page is initialised.  Parallel initialisation
of struct pages defers initialisation and __free_pages_bootmem can be
called for struct pages that cannot yet map struct page to PFN.  This
patch passes PFN to __free_pages_bootmem with no other functional change.
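
For reference, a sketch of the interface change (the exact pre-patch
signature is assumed from context):

        /* before: void __free_pages_bootmem(struct page *page, unsigned int order); */
        void __init __free_pages_bootmem(struct page *page, unsigned long pfn,
                                         unsigned int order);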

Signed-off-by: Mel Gorman <mgorman@suse.de>
Tested-by: Nate Zimmer <nzimmer@sgi.com>
Tested-by: Waiman Long <waiman.long@hp.com>
Tested-by: Daniel J Blueman <daniel@numascale.com>
Acked-by: Pekka Enberg <penberg@kernel.org>
Cc: Robin Holt <robinmholt@gmail.com>
Cc: Nate Zimmer <nzimmer@sgi.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Waiman Long <waiman.long@hp.com>
Cc: Scott Norton <scott.norton@hp.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoocfs2/dlm: fix BUG in dlm_move_lockres_to_recovery_list
Joseph Qi [Fri, 25 Mar 2016 21:21:29 +0000 (14:21 -0700)]
ocfs2/dlm: fix BUG in dlm_move_lockres_to_recovery_list

[ Upstream commit be12b299a83fc807bbaccd2bcb8ec50cbb0cb55c ]

When the master handles a convert request, it queues the ast first and
then returns the status.  It may happen that the ast arrives before the
request status because the two messages are sent by different threads.
If the master goes down right after the ast is sent, this may trigger a
BUG in dlm_move_lockres_to_recovery_list on the requesting node,
because the ast handler moves the lock to the grant list without
clearing lock->convert_pending.  So remove the BUG_ON statement and
check in dlmconvert_remote whether the ast has already been processed.

Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
Reported-by: Yiwen Jiang <jiangyiwen@huawei.com>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Tariq Saeed <tariq.x.saeed@oracle.com>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoocfs2/dlm: fix race between convert and recovery
Joseph Qi [Fri, 25 Mar 2016 21:21:26 +0000 (14:21 -0700)]
ocfs2/dlm: fix race between convert and recovery

[ Upstream commit ac7cf246dfdbec3d8fed296c7bf30e16f5099dac ]

There is a race window between dlmconvert_remote and
dlm_move_lockres_to_recovery_list, which can leave a lock with
OCFS2_LOCK_BUSY set on the grant list, and thus the system hangs.

dlmconvert_remote
{
        spin_lock(&res->spinlock);
        list_move_tail(&lock->list, &res->converting);
        lock->convert_pending = 1;
        spin_unlock(&res->spinlock);

        status = dlm_send_remote_convert_request();
        >>>>>> race window: the master has queued the ast and returned
               DLM_NORMAL, then goes down before sending the ast.
               This node detects that the master is down and calls
               dlm_move_lockres_to_recovery_list, which reverts the
               lock to the grant list.
               Then OCFS2_LOCK_BUSY won't be cleared, as the new master
               won't send the ast any more because it thinks it has
               already been authorized.

        spin_lock(&res->spinlock);
        lock->convert_pending = 0;
        if (status != DLM_NORMAL)
                dlm_revert_pending_convert(res, lock);
        spin_unlock(&res->spinlock);
}

In this case, check whether res->state has the DLM_LOCK_RES_RECOVERING
bit set (res is still recovering) or the res master has changed (the
new master has finished recovery), and if so reset the status to
DLM_RECOVERING so that the convert will be retried.
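
A sketch of that check in dlmconvert_remote(), right after the remote
request returns (field names follow the usual dlm structures; the exact
hunk may differ):

        spin_lock(&res->spinlock);
        if ((res->state & DLM_LOCK_RES_RECOVERING) ||
            res->owner != dlm->node_num) {
                /* still recovering, or a new master has taken over:
                 * make the caller retry the convert */
                status = DLM_RECOVERING;
        }
        lock->convert_pending = 0;
        if (status != DLM_NORMAL)
                dlm_revert_pending_convert(res, lock);
        spin_unlock(&res->spinlock);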

Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
Reported-by: Yiwen Jiang <jiangyiwen@huawei.com>
Reviewed-by: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Tariq Saeed <tariq.x.saeed@oracle.com>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoInput: ati_remote2 - fix crashes on detecting device with invalid descriptor
Vladis Dronov [Wed, 23 Mar 2016 18:53:46 +0000 (11:53 -0700)]
Input: ati_remote2 - fix crashes on detecting device with invalid descriptor

[ Upstream commit 950336ba3e4a1ffd2ca60d29f6ef386dd2c7351d ]

The ati_remote2 driver expects at least two interfaces with one
endpoint each. If given a malicious descriptor that specifies only one
interface or no endpoints, it will crash in the probe function.
Ensure there are at least two interfaces and one endpoint per
interface before using them.
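
A minimal sketch of such a sanity check in ati_remote2_probe(), using
the generic USB descriptor fields (the real patch may structure the
check differently):

        struct usb_device *udev = interface_to_usbdev(interface);

        if (udev->actconfig->desc.bNumInterfaces < 2 ||
            interface->cur_altsetting->desc.bNumEndpoints < 1) {
                dev_err(&interface->dev, "invalid descriptor: missing interface or endpoint\n");
                return -ENODEV;
        }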

The full disclosure: http://seclists.org/bugtraq/2016/Mar/90

Reported-by: Ralf Spenneberg <ralf@spenneberg.net>
Signed-off-by: Vladis Dronov <vdronov@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agoideapad-laptop: Add ideapad Y700 (15) to the no_hw_rfkill DMI list
John Dahlstrom [Sat, 27 Feb 2016 06:09:58 +0000 (00:09 -0600)]
ideapad-laptop: Add ideapad Y700 (15) to the no_hw_rfkill DMI list

[ Upstream commit 4db9675d927a71faa66e5ab128d2390d6329750b ]

Some Lenovo ideapad models lack a physical rfkill switch.
On Lenovo models ideapad Y700 Touch-15ISK and ideapad Y700-15ISK,
ideapad-laptop would wrongly report all radios as blocked by
hardware, which caused wireless network connections to fail.

Add these models without an rfkill switch to the no_hw_rfkill list.
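
The no_hw_rfkill list is an ordinary DMI table; a sketch of what one of
the added entries looks like (the exact DMI match strings are an
assumption):

        {
                .ident = "Lenovo ideapad Y700-15ISK",
                .matches = {
                        DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
                        DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad Y700-15ISK"),
                },
        },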

Signed-off-by: John Dahlstrom <jodarom@sdf.org>
Cc: <stable@vger.kernel.org> # 3.17.x-: 4fa9dab: ideapad_laptop: Lenovo G50-30 fix rfkill reports wireless blocked
Cc: <stable@vger.kernel.org>
Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agostaging: comedi: ni_mio_common: fix the ni_write[blw]() functions
H Hartley Sweeten [Tue, 22 Mar 2016 17:04:48 +0000 (10:04 -0700)]
staging: comedi: ni_mio_common: fix the ni_write[blw]() functions

[ Upstream commit bd3a3cd6c27b117fb9a43a38c8072c95332beecc ]

Memory-mapped I/O (dev->mmio) should not also write to the ioport
(dev->iobase) registers. Add the missing 'else' to these functions.
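
A sketch of the fixed shape of one of these helpers (ni_writel(); the
exact signature and register handling are assumed from the commit
title):

        static void ni_writel(struct comedi_device *dev, uint32_t data, int reg)
        {
                if (dev->mmio)
                        writel(data, dev->mmio + reg);
                else                            /* the missing 'else' */
                        outl(data, dev->iobase + reg);
        }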

Fixes: 0953ee4acca0 ("staging: comedi: ni_mio_common: checkpatch.pl cleanup (else not useful)")
Cc: <stable@vger.kernel.org> # 3.17+
Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Reviewed-by: Ian Abbott <abbotti@mev.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
9 years agorapidio/rionet: fix deadlock on SMP
Aurelien Jacquiot [Tue, 22 Mar 2016 21:25:42 +0000 (14:25 -0700)]
rapidio/rionet: fix deadlock on SMP

[ Upstream commit 36915976eca58f2eefa040ba8f9939672564df61 ]

Fix a deadlock during concurrent receive and transmit operations on SMP
platforms caused by the use of the wrong lock: on transmit, the
'tx_lock' spinlock should be used instead of 'lock', which is used for
the receive path.

This fix is applicable to kernel versions starting from v2.15.
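
A sketch of the idea in rionet_start_xmit() (field names taken from
drivers/net/rionet.c; the surrounding queueing code is elided):

        static int rionet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
        {
                struct rionet_private *rnet = netdev_priv(ndev);
                unsigned long flags;

                spin_lock_irqsave(&rnet->tx_lock, flags);       /* was rnet->lock */

                /* ... add the skb to the outbound mailbox queue ... */

                spin_unlock_irqrestore(&rnet->tx_lock, flags);
                return NETDEV_TX_OK;
        }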

Signed-off-by: Aurelien Jacquiot <a-jacquiot@ti.com>
Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com>
Cc: Matt Porter <mporter@kernel.crashing.org>
Cc: Andre van Herk <andre.van.herk@prodrive-technologies.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>