commit 0758142f8a850928b886978753b1d8c473d6eee2
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Thu Jul 17 15:55:52 2014 -0700

    Linux 3.4.99

commit cd0c28d2281c65aa0ae89be80abcbf8e21c885b4
Author: Lan Tianyu <tianyu.lan@intel.com>
Date:   Mon Jul 7 15:47:12 2014 +0800

    ACPI / battery: Retry to get battery information if failed during probing
    
    commit 75646e758a0ecbed5024454507d5be5b9ea9dcbf upstream.
    
    On some machines (e.g. the Lenovo Z480), the EC is not stable during
    boot, so the battery driver sometimes fails to load because it cannot
    get the battery information from the EC. After several retries, the
    operation works. This patch retries getting the battery information up
    to 5 times if the first attempt fails.
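
    A rough sketch of such a retry helper (the helper name and the 20 ms
    delay are illustrative assumptions, not necessarily the exact driver
    code):

        static int acpi_battery_update_retry(struct acpi_battery *battery)
        {
                int retry, ret;

                for (retry = 5; retry; retry--) {
                        ret = acpi_battery_update(battery);
                        if (!ret)
                                break;
                        /* give the EC some time to settle before retrying */
                        msleep(20);
                }
                return ret;
        }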
    
    [ backport to 3.14.5: removed second parameter in acpi_battery_update(),
    introduced by the commit 9e50bc14a7f58b5d8a55973b2d69355852ae2dae (ACPI /
    battery: Accelerate battery resume callback)]
    
    [naszar <naszar@ya.ru>: backport to 3.14.5]
    Link: https://bugzilla.kernel.org/show_bug.cgi?id=75581
    Reported-and-tested-by: naszar <naszar@ya.ru>
    Cc: All applicable <stable@vger.kernel.org>
    Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d824be9bc8017b84364f7e5f207fd29119270196
Author: Roland Dreier <roland@purestorage.com>
Date:   Fri May 2 11:18:41 2014 -0700

    x86, ioremap: Speed up check for RAM pages
    
    commit c81c8a1eeede61e92a15103748c23d100880cc8a upstream.
    
    In __ioremap_caller() (the guts of ioremap), we loop over the range of
    pfns being remapped and check each one individually with page_is_ram().
    For large ioremaps, this can be very slow.  For example, we have a
    device with a 256 GiB PCI BAR, and ioremapping this BAR can take 20+
    seconds -- sometimes long enough to trigger the soft lockup detector!
    
    Internally, page_is_ram() calls walk_system_ram_range() on a single
    page.  Instead, we can make a single call to walk_system_ram_range()
    from __ioremap_caller(), and do our further checks only for any RAM
    pages that we find.  For the common case of MMIO, this saves an enormous
    amount of work, since the range being ioremapped doesn't intersect
    system RAM at all.
    
    With this change, ioremap on our 256 GiB BAR takes less than 1 second.
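
    A condensed sketch of the approach (the callback name and the checks
    inside it are illustrative, following the description above rather
    than the exact patch):

        /* The callback only runs for pages inside System RAM ranges. */
        static int __ioremap_check_ram(unsigned long start_pfn,
                                       unsigned long nr_pages, void *arg)
        {
                unsigned long i;

                for (i = 0; i < nr_pages; ++i)
                        if (pfn_valid(start_pfn + i) &&
                            !PageReserved(pfn_to_page(start_pfn + i)))
                                return 1;       /* usable RAM, refuse to remap */
                return 0;
        }

        /* In __ioremap_caller(): one walk instead of one call per pfn. */
        if (walk_system_ram_range(pfn, last_pfn - pfn + 1, NULL,
                                  __ioremap_check_ram) == 1)
                return NULL;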
    
    Signed-off-by: Roland Dreier <roland@purestorage.com>
    Link: http://lkml.kernel.org/r/1399054721-1331-1-git-send-email-roland@kernel.org
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2a77794da631ec2b2d2243b83fa6793676cdf411
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Wed Jun 11 18:44:04 2014 +0000

    rtmutex: Plug slow unlock race
    
    commit 27e35715df54cbc4f2d044f681802ae30479e7fb upstream.
    
    When the rtmutex fast path is enabled, the slow unlock function can
    create the following situation:
    
    spin_lock(foo->m->wait_lock);
    foo->m->owner = NULL;
                                   rt_mutex_lock(foo->m); <-- fast path
                                   free = atomic_dec_and_test(foo->refcnt);
                                   rt_mutex_unlock(foo->m); <-- fast path
                                   if (free)
                                           kfree(foo);

    spin_unlock(foo->m->wait_lock); <--- Use after free.
    
    Plug the race by changing the slow unlock to the following scheme:
    
     while (!rt_mutex_has_waiters(m)) {
             /* Clear the waiters bit in m->owner */
             clear_rt_mutex_waiters(m);
             owner = rt_mutex_owner(m);
             spin_unlock(m->wait_lock);
             if (cmpxchg(m->owner, owner, 0) == owner)
                     return;
             spin_lock(m->wait_lock);
     }
    
    So in case a new waiter comes in while the owner tries the slow-path
    unlock, we have two situations:
    
     unlock(wait_lock);
                                    lock(wait_lock);
     cmpxchg(p, owner, 0) == owner
                                    mark_rt_mutex_waiters(lock);
                                    acquire(lock);

    Or:

     unlock(wait_lock);
                                    lock(wait_lock);
                                    mark_rt_mutex_waiters(lock);
     cmpxchg(p, owner, 0) != owner
                                    enqueue_waiter();
                                    unlock(wait_lock);
     lock(wait_lock);
     wakeup_next_waiter();
     unlock(wait_lock);
                                    lock(wait_lock);
                                    acquire(lock);
    
    If the fast path is disabled, then the simple
    
       m->owner = NULL;
       unlock(m->wait_lock);
    
    is sufficient as all access to m->owner is serialized via
    m->wait_lock.
    
    Also document and clarify the wakeup_next_waiter function as suggested
    by Oleg Nesterov.
    
    Reported-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Link: http://lkml.kernel.org/r/20140611183852.937945560@linutronix.de
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ea018da95368adfb700689bd9842714f7c3db665
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Thu Jun 5 12:34:23 2014 +0200

    rtmutex: Handle deadlock detection smarter
    
    commit 3d5c9340d1949733eb37616abd15db36aef9a57c upstream.
    
    Even when deadlock detection is not requested by the caller, we can
    detect deadlocks. Right now the code stops the lock chain walk and
    keeps the waiter enqueued, even on itself. It is silly not to yell
    when such a scenario is detected and to keep the waiter enqueued.
    
    Return -EDEADLK unconditionally and handle it at the call sites.
    
    The futex calls return -EDEADLK. The non-futex ones dequeue the
    waiter, throw a warning and put the task into a schedule loop, as
    sketched below.
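
    A rough sketch of that call-site handling (the names, the print
    helper and the overall structure are illustrative assumptions, not
    the exact patch):

        if (ret == -EDEADLK && !detect_deadlock) {
                remove_waiter(lock, &waiter);           /* dequeue ourselves */
                rt_mutex_print_deadlock(&waiter);       /* yell */
                while (1) {
                        set_current_state(TASK_INTERRUPTIBLE);
                        schedule();                     /* park the task */
                }
        }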
    
    Tagged for stable as it makes the code more robust.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Brad Mouring <bmouring@ni.com>
    Link: http://lkml.kernel.org/r/20140605152801.836501969@linutronix.de
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 307e2e09be9993d7fe403a7310b28ab2d8e2f6ce
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Thu Jun 5 11:16:12 2014 +0200

    rtmutex: Detect changes in the pi lock chain
    
    commit 82084984383babe728e6e3c9a8e5c46278091315 upstream.
    
    When we walk the lock chain, we drop all locks after each step. So the
    lock chain can change under us before we reacquire the locks. That's
    harmless in principle as we just follow the wrong lock path. But it
    can lead to a false positive in the deadlock detection logic:
    
    T0 holds L0
    T0 blocks on L1 held by T1
    T1 blocks on L2 held by T2
    T2 blocks on L3 held by T3
    T3 blocks on L4 held by T4
    
    Now we walk the chain
    
    lock T1 -> lock L2 -> adjust L2 -> unlock T1 ->
         lock T2 ->  adjust T2 ->  drop locks
    
    T2 times out and blocks on L0
    
    Now we continue:
    
    lock T2 -> lock L0 -> deadlock detected, but it's not a deadlock at all.
    
    Brad tried to work around that in the deadlock detection logic itself,
    but the more I looked at it the less I liked it, because it's crystal
    ball magic after the fact.
    
    We can actually detect a chain change very simply:
    
    lock T1 -> lock L2 -> adjust L2 -> unlock T1 -> lock T2 -> adjust T2 ->
    
         next_lock = T2->pi_blocked_on->lock;
    
    drop locks
    
    T2 times out and blocks on L0
    
    Now we continue:
    
    lock T2 ->
    
     if (next_lock != T2->pi_blocked_on->lock)
             return;
    
    So if we detect that T2 is now blocked on a different lock we stop the
    chain walk. That's also correct in the following scenario:
    
    lock T1 -> lock L2 -> adjust L2 -> unlock T1 -> lock T2 -> adjust T2 ->
    
         next_lock = T2->pi_blocked_on->lock;
    
    drop locks
    
    T3 times out and drops L3
    T2 acquires L3 and blocks on L4 now
    
    Now we continue:
    
    lock T2 ->
    
     if (next_lock != T2->pi_blocked_on->lock)
             return;
    
    We don't have to follow up the chain at that point, because T2
    propagated our priority up to T4 already.
    
    [ Folded a cleanup patch from peterz ]
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reported-by: Brad Mouring <bmouring@ni.com>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Link: http://lkml.kernel.org/r/20140605152801.930031935@linutronix.de
    Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 90b421b52720b30644e104da002505f08a77c07a
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Thu May 22 03:25:39 2014 +0000

    rtmutex: Fix deadlock detector for real
    
    commit 397335f004f41e5fcf7a795e94eb3ab83411a17c upstream.
    
    The current deadlock detection logic does not work reliably due to the
    following early exit path:
    
    	/*
    	 * Drop out, when the task has no waiters. Note,
    	 * top_waiter can be NULL, when we are in the deboosting
    	 * mode!
    	 */
    	if (top_waiter && (!task_has_pi_waiters(task) ||
    			   top_waiter != task_top_pi_waiter(task)))
    		goto out_unlock_pi;
    
    So this not only exits when the task has no waiters, it also exits
    unconditionally when the current waiter is not the top priority waiter
    of the task.
    
    So in a nested locking scenario, it might abort the lock chain walk
    and therefore miss a potential deadlock.
    
    Simple fix: Continue the chain walk when deadlock detection is
    enabled, as sketched below.
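
    Sketched against the excerpt above, the early exit becomes conditional
    on the deadlock detection request (a sketch, not the exact diff;
    detect_deadlock is the caller's request flag):

        if (top_waiter) {
                if (!task_has_pi_waiters(task))
                        goto out_unlock_pi;
                /*
                 * Only stop on a non-top waiter when deadlock
                 * detection was not requested.
                 */
                if (!detect_deadlock &&
                    top_waiter != task_top_pi_waiter(task))
                        goto out_unlock_pi;
        }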
    
    We also avoid the whole enqueue if we detect the deadlock right away
    (A-A). It's an optimization, but it also prevents another waiter, who
    comes in after the detection and before the task has undone the
    damage, from observing the situation, detecting the deadlock and
    returning -EDEADLOCK, which would be wrong as the other task is not
    in a deadlock situation.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
    Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
    Link: http://lkml.kernel.org/r/20140522031949.725272460@linutronix.de
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit bf09db97205d46b3e083582eb2799aedddd9953b
Author: Steven Rostedt (Red Hat) <rostedt@goodmis.org>
Date:   Tue Jun 24 23:50:09 2014 -0400

    tracing: Remove ftrace_stop/start() from reading the trace file
    
    commit 099ed151675cd1d2dbeae1dac697975f6a68716d upstream.
    
    Reading and writing the trace file should not be able to disable all
    function tracing callbacks. There are other users today (like kprobes
    and perf), and reading a trace file should not stop those from
    working.
    
    Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit fa1bd9b16b3432bce7937a5a4b5d75ab2f5b634c
Author: Christian König <christian.koenig@amd.com>
Date:   Wed Jun 4 15:29:56 2014 +0200

    drm/radeon: stop poisoning the GART TLB
    
    commit 0986c1a55ca64b44ee126a2f719a6e9f28cbe0ed upstream.
    
    When we set the valid bit on invalid GART entries, they are loaded
    into the TLB when an adjacent entry is loaded. This poisons the TLB
    with invalid entries that are sometimes not correctly removed on TLB
    flush.
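
    Conceptually, the fix is to write unbound entries without the valid
    bit so the hardware never pulls them into the TLB as neighbours (the
    names and bit layout below are invented for illustration and do not
    reflect the radeon register format):

        u64 __iomem *gart_table;        /* the GART page table */
        u64 entry = dummy_page_addr & ~0xfffULL;

        if (mapped)
                entry |= GART_PTE_VALID;        /* valid bit only for real mappings */
        writeq(entry, &gart_table[gpu_page_idx]);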
    
    For stable inclusion the patch probably needs to be modified a bit.
    
    Signed-off-by: Christian König <christian.koenig@amd.com>
    Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ef018263c824ec34d06419502555dbd8889e8182
Author: Theodore Ts'o <tytso@mit.edu>
Date:   Sat Jul 5 18:40:52 2014 -0400

    ext4: clarify error count warning messages
    
    commit ae0f78de2c43b6fadd007c231a352b13b5be8ed2 upstream.
    
    Make it clear that the values printed are times, and that the error
    count is counted since the last fsck. Also add a note about the
    required fsck version.
    
    Signed-off-by: Pavel Machek <pavel@ucw.cz>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Reviewed-by: Andreas Dilger <adilger@dilger.ca>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 7b9eab8f52499099d3b6351a7fd4277455666fa7
Author: Anton Blanchard <anton@samba.org>
Date:   Thu May 29 08:15:38 2014 +1000

    powerpc/perf: Never program book3s PMCs with values >= 0x80000000
    
    commit f56029410a13cae3652d1f34788045c40a13ffc7 upstream.
    
    We are seeing a lot of PMU warnings on POWER8:
    
        Can't find PMC that caused IRQ
    
    Looking closer, the active PMC is 0 at this point and we took a PMU
    exception on the transition from negative to 0. Some versions of POWER8
    have an issue where they edge-detect rather than level-detect PMC
    overflows.
    
    A number of places program the PMC with (0x80000000 - period_left),
    where period_left can be negative. We can either fix all of these or
    just ensure that period_left is always >= 1.
    
    This patch takes the second option.
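
    A minimal sketch of that clamping (variable names are illustrative):

        s64 left = local64_read(&event->hw.period_left);

        if (left < 1)
                left = 1;       /* keeps the programmed value below 0x80000000 */
        val = 0x80000000UL - left;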
    
    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 30bbd3940f341bfd836e3afcac424c5a9ae19115
Author: Axel Lin <axel.lin@ingics.com>
Date:   Wed Jul 2 08:29:55 2014 +0800

    hwmon: (adm1029) Ensure the fan_div cache is updated in set_fan_div
    
    commit 1035a9e3e9c76b64a860a774f5b867d28d34acc2 upstream.
    
    Writing to fanX_div does not clear the cache. As a result, reading
    from fanX_div may return the old value for up to two seconds
    after writing a new value.
    
    This patch ensures the fan_div cache is updated in set_fan_div().
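
    A sketch of the fix (abbreviated and with illustrative names, not the
    exact driver code):

        /* compute the new register value from the requested divider */
        reg = (reg & 0x3f) | (val << 6);
        /* update the cache so reads within the cache interval see it */
        data->fan_div[nr] = reg;
        /* then write it out to the chip */
        i2c_smbus_write_byte_data(client, ADM1029_REG_FAN_DIV[nr], reg);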
    
    Reported-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Axel Lin <axel.lin@ingics.com>
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 5cc80fb26bb491e88f6aa8761ba0371693f73bd8
Author: Axel Lin <axel.lin@ingics.com>
Date:   Wed Jul 2 07:44:44 2014 +0800

    hwmon: (amc6821) Fix permissions for temp2_input
    
    commit df86754b746e9a0ff6f863f690b1c01d408e3cdc upstream.
    
    temp2_input should not be writable; fix its permissions.
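
    The usual shape of such a fix is dropping the write bit and the store
    callback from the attribute definition (a sketch; the callback and
    index names are assumptions):

        /* read-only: S_IRUGO instead of S_IRUGO | S_IWUSR, no store callback */
        static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO,
                                  get_temp, NULL, IDX_TEMP2_INPUT);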
    
    Reported-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Axel Lin <axel.lin@ingics.com>
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f54e04114e99817aae34261901fa5b54e6e2c336
Author: Gu Zheng <guz.fnst@cn.fujitsu.com>
Date:   Wed Jun 25 09:57:18 2014 +0800

    cpuset,mempolicy: fix sleeping function called from invalid context
    
    commit 391acf970d21219a2a5446282d3b20eace0c0d7a upstream.
    
    When running kernel 3.15-rc7+, the following bug occurs:
    [ 9969.258987] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:586
    [ 9969.359906] in_atomic(): 1, irqs_disabled(): 0, pid: 160655, name: python
    [ 9969.441175] INFO: lockdep is turned off.
    [ 9969.488184] CPU: 26 PID: 160655 Comm: python Tainted: G       A      3.15.0-rc7+ #85
    [ 9969.581032] Hardware name: FUJITSU-SV PRIMEQUEST 1800E/SB, BIOS PRIMEQUEST 1000 Series BIOS Version 1.39 11/16/2012
    [ 9969.706052]  ffffffff81a20e60 ffff8803e941fbd0 ffffffff8162f523 ffff8803e941fd18
    [ 9969.795323]  ffff8803e941fbe0 ffffffff8109995a ffff8803e941fc58 ffffffff81633e6c
    [ 9969.884710]  ffffffff811ba5dc ffff880405c6b480 ffff88041fdd90a0 0000000000002000
    [ 9969.974071] Call Trace:
    [ 9970.003403]  [<ffffffff8162f523>] dump_stack+0x4d/0x66
    [ 9970.065074]  [<ffffffff8109995a>] __might_sleep+0xfa/0x130
    [ 9970.130743]  [<ffffffff81633e6c>] mutex_lock_nested+0x3c/0x4f0
    [ 9970.200638]  [<ffffffff811ba5dc>] ? kmem_cache_alloc+0x1bc/0x210
    [ 9970.272610]  [<ffffffff81105807>] cpuset_mems_allowed+0x27/0x140
    [ 9970.344584]  [<ffffffff811b1303>] ? __mpol_dup+0x63/0x150
    [ 9970.409282]  [<ffffffff811b1385>] __mpol_dup+0xe5/0x150
    [ 9970.471897]  [<ffffffff811b1303>] ? __mpol_dup+0x63/0x150
    [ 9970.536585]  [<ffffffff81068c86>] ? copy_process.part.23+0x606/0x1d40
    [ 9970.613763]  [<ffffffff810bf28d>] ? trace_hardirqs_on+0xd/0x10
    [ 9970.683660]  [<ffffffff810ddddf>] ? monotonic_to_bootbased+0x2f/0x50
    [ 9970.759795]  [<ffffffff81068cf0>] copy_process.part.23+0x670/0x1d40
    [ 9970.834885]  [<ffffffff8106a598>] do_fork+0xd8/0x380
    [ 9970.894375]  [<ffffffff81110e4c>] ? __audit_syscall_entry+0x9c/0xf0
    [ 9970.969470]  [<ffffffff8106a8c6>] SyS_clone+0x16/0x20
    [ 9971.030011]  [<ffffffff81642009>] stub_clone+0x69/0x90
    [ 9971.091573]  [<ffffffff81641c29>] ? system_call_fastpath+0x16/0x1b
    
    The cause is that cpuset_mems_allowed() tries to take
    mutex_lock(&callback_mutex) under rcu_read_lock (held in
    __mpol_dup()). Since the access to the cpuset in cpuset_mems_allowed()
    is already under rcu_read_lock, we can shrink the rcu_read_lock
    protection region in __mpol_dup() so that only the cpuset access in
    current_cpuset_is_being_rebound() is protected. That avoids this bug.
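
    A sketch of the narrowed protection region (close to, though not
    necessarily identical with, the actual patch):

        int current_cpuset_is_being_rebound(void)
        {
                int ret;

                /* only the cpuset access itself needs RCU protection */
                rcu_read_lock();
                ret = task_cs(current) == cpuset_being_rebound;
                rcu_read_unlock();

                return ret;
        }

    __mpol_dup() can then drop its own rcu_read_lock()/rcu_read_unlock()
    around the whole rebind block.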
    
    This patch is a temporary solution that just addresses the bug
    mentioned above; it cannot fix the long-standing issue with
    cpuset.mems rebinding on fork():
    
    "When the forker's task_struct is duplicated (which includes
     ->mems_allowed) and it races with an update to cpuset_being_rebound
     in update_tasks_nodemask() then the task's mems_allowed doesn't get
     updated. And the child task's mems_allowed can be wrong if the
     cpuset's nodemask changes before the child has been added to the
     cgroup's tasklist."
    
    Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
    Acked-by: Li Zefan <lizefan@huawei.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d06191b3d67e11142db2d601a8393535e58de5e3
Author: Bert Vermeulen <bert@biot.com>
Date:   Tue Jul 8 14:42:23 2014 +0200

    USB: ftdi_sio: Add extra PID.
    
    commit 5a7fbe7e9ea0b1b9d7ffdba64db1faa3a259164c upstream.
    
    This patch adds PID 0x0003 for VID 0x128d (Testo). At least the
    Testo 435-4 uses this ID, and other gear likely does as well.
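
    The usual shape of such an addition (TESTO_3_PID is an assumed macro
    name for the new product ID):

        /* in ftdi_sio_ids.h */
        #define TESTO_3_PID             0x0003

        /* in the ftdi_sio device id table */
        { USB_DEVICE(TESTO_VID, TESTO_3_PID) },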
    
    Signed-off-by: Bert Vermeulen <bert@biot.com>
    Cc: Johan Hovold <johan@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d45a2380fbf09c49b2f595171ef6b89fb019445b
Author: Andras Kovacs <andras@sth.sze.hu>
Date:   Fri Jun 27 14:50:11 2014 +0200

    USB: cp210x: add support for Corsair usb dongle
    
    commit b9326057a3d8447f5d2e74a7b521ccf21add2ec0 upstream.
    
    Corsair USB dongles are shipped with Corsair AXi series PSUs. They
    are cp210x serial USB devices, so make the driver detect them. I have
    a program that can get information from these PSUs.
    
    Tested with 2 different dongles shipped with Corsair AX860i and
    AX1200i units.
    
    Signed-off-by: Andras Kovacs <andras@sth.sze.hu>
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit fd7afe29185bf592cb13a698f9a34dc60758447e
Author: Bernd Wachter <bernd.wachter@jolla.com>
Date:   Wed Jul 2 12:36:48 2014 +0300

    usb: option: Add ID for Telewell TW-LTE 4G v2
    
    commit 3d28bd840b2d3981cd28caf5fe1df38f1344dd60 upstream.
    
    Add the ID of the Telewell TW-LTE 4G v2 hardware to the option driver
    to get the legacy serial interface working.
    
    Signed-off-by: Bernd Wachter <bernd.wachter@jolla.com>
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>