From 27abd5b6ac4eee7f39f2292be062acb986e1b36a Mon Sep 17 00:00:00 2001
From: mmotm auto import
Date: Wed, 18 Jan 2017 23:01:33 +0000
Subject: [PATCH] origin GIT fa19a769f82fb9a5ca000b83cacd13fcaeda51ac

commit 5eb7c0d04f04a667c049fe090a95494a8de2955c
Author: Larry Finger
Date: Sun Jan 1 20:25:25 2017 -0600

taint/module: Fix problems when out-of-kernel driver defines true or false

Commit 7fd8329ba502 ("taint/module: Clean up global and module taint flags handling") used the key words true and false as character members of a new struct. These names cause problems when out-of-kernel modules such as VirtualBox include their own definitions of true and false.

Fixes: 7fd8329ba502 ("taint/module: Clean up global and module taint flags handling")
Signed-off-by: Larry Finger
Cc: Petr Mladek
Cc: Jessica Yu
Cc: Rusty Russell
Reported-by: Valdis Kletnieks
Reviewed-by: Petr Mladek
Acked-by: Rusty Russell
Signed-off-by: Jessica Yu

commit 4e71de7986386d5fd3765458f27d612931f27f5e
Author: Zhou Chengming
Date: Mon Jan 16 11:21:11 2017 +0800

perf/x86/intel: Handle exclusive threadid correctly on CPU hotplug

The CPU hotplug function intel_pmu_cpu_starting() sets cpu_hw_events.excl_thread_id unconditionally to 1 when the shared exclusive counters data structure is already available for the sibling thread.

This works during the boot process because the first sibling gets threadid 0 assigned and the second sibling, which shares the data structure, gets 1. But when the first thread of the core is offlined and onlined again it shares the data structure with the second thread and gets exclusive thread id 1 assigned as well.

Prevent this by checking the threadid of the already online thread.

[ tglx: Rewrote changelog ]

Signed-off-by: Zhou Chengming
Cc: NuoHan Qiao
Cc: ak@linux.intel.com
Cc: peterz@infradead.org
Cc: kan.liang@intel.com
Cc: dave.hansen@linux.intel.com
Cc: eranian@google.com
Cc: qiaonuohan@huawei.com
Cc: davidcc@google.com
Cc: guohanjun@huawei.com
Link: http://lkml.kernel.org/r/1484536871-3131-1-git-send-email-zhouchengming1@huawei.com
Signed-off-by: Thomas Gleixner
---
 arch/x86/events/intel/core.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

commit bc7c36eedb0c7004aa06c2afc3c5385adada8fa3
Author: Joonyoung Shim
Date: Tue Jan 17 13:54:36 2017 +0900

clocksource/exynos_mct: Clear interrupt when cpu is shut down

When a CPU goes offline, a potentially pending timer interrupt is not cleared. When the CPU comes online again, the pending interrupt is delivered before the per-cpu clockevent device is initialized. As a consequence the tick interrupt handler dereferences a NULL pointer.

[ 51.251378] Unable to handle kernel NULL pointer dereference at virtual address 00000040
[ 51.289348] task: ee942d00 task.stack: ee960000
[ 51.293861] PC is at tick_periodic+0x38/0xb0
[ 51.298102] LR is at tick_handle_periodic+0x1c/0x90

Clear the pending interrupt in the cpu dying path.
Fixes: 56a94f13919c ("clocksource: exynos_mct: Avoid blocking calls in the cpu hotplug notifier")
Reported-by: Seung-Woo Kim
Signed-off-by: Joonyoung Shim
Cc: linux-samsung-soc@vger.kernel.org
Cc: cw00.choi@samsung.com
Cc: daniel.lezcano@linaro.org
Cc: stable@vger.kernel.org
Cc: javier@osg.samsung.com
Cc: kgene@kernel.org
Cc: krzk@kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1484628876-22065-1-git-send-email-jy0922.shim@samsung.com
Signed-off-by: Thomas Gleixner

commit 0faa9cb5b3836a979864a6357e01d2046884ad52
Author: Jamal Hadi Salim
Date: Sun Jan 15 10:14:06 2017 -0500

net sched actions: fix refcnt when GETing of action after bind

Demonstrating the issue:

.. add a drop action
$ sudo $TC actions add action drop index 10

.. retrieve it
$ sudo $TC -s actions get action gact index 10

action order 1: gact action drop
 random type none pass val 0
 index 10 ref 2 bind 0 installed 29 sec used 29 sec
Action statistics:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0

... bug 1 above: reference is two. Reference is actually 1 but we forget to subtract 1.

... do a GET again and we see the same issue; try a few times and nothing changes

~$ sudo $TC -s actions get action gact index 10

action order 1: gact action drop
 random type none pass val 0
 index 10 ref 2 bind 0 installed 31 sec used 31 sec
Action statistics:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0

... lets try to bind the action to a filter..
$ sudo $TC qdisc add dev lo ingress
$ sudo $TC filter add dev lo parent ffff: protocol ip prio 1 \
  u32 match ip dst 127.0.0.1/32 flowid 1:1 action gact index 10

... and now a few GETs:

$ sudo $TC -s actions get action gact index 10

action order 1: gact action drop
 random type none pass val 0
 index 10 ref 3 bind 1 installed 204 sec used 204 sec
Action statistics:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0

$ sudo $TC -s actions get action gact index 10

action order 1: gact action drop
 random type none pass val 0
 index 10 ref 4 bind 1 installed 206 sec used 206 sec
Action statistics:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0

$ sudo $TC -s actions get action gact index 10

action order 1: gact action drop
 random type none pass val 0
 index 10 ref 5 bind 1 installed 235 sec used 235 sec
Action statistics:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0

.... as can be observed the reference count keeps going up.
After the fix:

$ sudo $TC actions add action drop index 10
$ sudo $TC -s actions get action gact index 10

action order 1: gact action drop
 random type none pass val 0
 index 10 ref 1 bind 0 installed 4 sec used 4 sec
Action statistics:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0

$ sudo $TC -s actions get action gact index 10

action order 1: gact action drop
 random type none pass val 0
 index 10 ref 1 bind 0 installed 6 sec used 6 sec
Action statistics:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0

$ sudo $TC qdisc add dev lo ingress
$ sudo $TC filter add dev lo parent ffff: protocol ip prio 1 \
  u32 match ip dst 127.0.0.1/32 flowid 1:1 action gact index 10

$ sudo $TC -s actions get action gact index 10

action order 1: gact action drop
 random type none pass val 0
 index 10 ref 2 bind 1 installed 32 sec used 32 sec
Action statistics:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0

$ sudo $TC -s actions get action gact index 10

action order 1: gact action drop
 random type none pass val 0
 index 10 ref 2 bind 1 installed 33 sec used 33 sec
Action statistics:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0

Fixes: aecc5cefc389 ("net sched actions: fix GETing actions")
Signed-off-by: Jamal Hadi Salim
Signed-off-by: David S. Miller

commit 9577b174cd0323d287c994ef0891db71666d0765
Author: Jack Morgenstein
Date: Mon Jan 16 18:31:39 2017 +0200

net/mlx4_core: Eliminate warning messages for SRQ_LIMIT under SRIOV

When running SRIOV, warnings for SRQ LIMIT events flood the Hypervisor's message log when (correct, normally operating) apps use SRQ LIMIT events as a trigger to post WQEs to SRQs.

Add more information to the existing debug printout for SRQ_LIMIT, and output the warning messages only for the SRQ CATAS ERROR event.

Fixes: acba2420f9d2 ("mlx4_core: Add wrapper functions and comm channel and slave event support to EQs")
Fixes: e0debf9cb50d ("mlx4_core: Reduce warning message for SRQ_LIMIT event to debug level")
Signed-off-by: Jack Morgenstein
Signed-off-by: Tariq Toukan
Signed-off-by: David S. Miller

commit 7c3945bc2073554bb2ecf983e073dee686679c53
Author: Jack Morgenstein
Date: Mon Jan 16 18:31:38 2017 +0200

net/mlx4_core: Fix when to save some qp context flags for dynamic VST to VGT transitions

Save the qp context flags byte containing the flag disabling vlan stripping in the RESET to INIT qp transition, rather than in the INIT to RTR transition. Per the firmware spec, the flags in this byte are active in the RESET to INIT transition.

As a result of saving the flags in the incorrect qp transition, when switching dynamically from VGT to VST and back to VGT, the vlan remained stripped (as is required for VST) and did not return to not-stripped (as is required for VGT).

Fixes: f0f829bf42cd ("net/mlx4_core: Add immediate activate for VGT->VST->VGT")
Signed-off-by: Jack Morgenstein
Signed-off-by: Tariq Toukan
Signed-off-by: David S. Miller

commit 291c566a28910614ce42d0ffe82196eddd6346f4
Author: Jack Morgenstein
Date: Mon Jan 16 18:31:37 2017 +0200

net/mlx4_core: Fix racy CQ (Completion Queue) free

In functions mlx4_cq_completion() and mlx4_cq_event(), the radix_tree_lookup requires a rcu_read_lock. This is mandatory: if another core frees the CQ, it could run the radix_tree_node_rcu_free() call_rcu() callback while it is being used by the radix tree lookup function.
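A minimal sketch of the reader-side pattern the paragraph above calls for (illustrative names only, not the mlx4 code): the lookup is wrapped in rcu_read_lock()/rcu_read_unlock(), so a concurrent free that removes the entry and defers the actual kfree via call_rcu() cannot release the radix tree node while the lookup is still walking it.

    /* sketch only -- hypothetical cq type and table, not the mlx4 implementation */
    #include <linux/radix-tree.h>
    #include <linux/rcupdate.h>

    struct toy_cq {
        u32  cqn;
        void (*comp)(struct toy_cq *cq);
    };

    static RADIX_TREE(toy_cq_table, GFP_ATOMIC);

    static void toy_cq_completion(u32 cqn)
    {
        struct toy_cq *cq;

        rcu_read_lock();                /* protects the radix tree nodes */
        cq = radix_tree_lookup(&toy_cq_table, cqn);
        if (cq)
            cq->comp(cq);               /* safe here because the freeing side defers kfree with call_rcu() */
        rcu_read_unlock();
    }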
Additionally, in function mlx4_cq_event(), since we are adding the rcu lock around the radix-tree lookup, we no longer need to take the spinlock. Also, the synchronize_irq() call for the async event eliminates the need for incrementing the cq reference count in mlx4_cq_event().

Other changes:

1. In function mlx4_cq_free(), replace spin_lock_irq with spin_lock: we no longer take this spinlock in the interrupt context. The spinlock here, therefore, simply protects against different threads simultaneously invoking mlx4_cq_free() for different cq's.

2. In function mlx4_cq_free(), we move the radix tree delete to before the synchronize_irq() calls. This guarantees that we will not access this cq during any subsequent interrupts, and therefore can safely free the CQ after the synchronize_irq calls. The rcu_read_lock in the interrupt handlers only needs to protect against corrupting the radix tree; the interrupt handlers may access the cq outside the rcu_read_lock due to the synchronize_irq calls which protect against premature freeing of the cq.

3. In function mlx4_cq_event(), we change the mlx_warn message to mlx4_dbg.

4. We leave the cq reference count mechanism in place, because it is still needed for the cq completion tasklet mechanism.

Fixes: 6d90aa5cf17b ("net/mlx4_core: Make sure there are no pending async events when freeing CQ")
Fixes: 225c7b1feef1 ("IB/mlx4: Add a driver Mellanox ConnectX InfiniBand adapters")
Signed-off-by: Jack Morgenstein
Signed-off-by: Matan Barak
Signed-off-by: Tariq Toukan
Signed-off-by: David S. Miller

commit b618ab4561d40664492cf9f9507f19a1c8272970
Author: Heiner Kallweit
Date: Sun Jan 15 19:19:00 2017 +0100

net: stmmac: don't use netdev_[dbg, info, ..] before net_device is registered

Don't use netdev_info and friends before the net_device is registered. This avoids ugly messages like "meson8b-dwmac c9410000.ethernet (unnamed net_device) (uninitialized): Enable RX Mitigation via HW Watchdog Timer".

Signed-off-by: Heiner Kallweit
Signed-off-by: David S. Miller

commit abeffce90c7f6ce74de9794ad0977a168edf8ef6
Author: Arnd Bergmann
Date: Sun Jan 15 19:50:46 2017 +0200

net/mlx5e: Fix a -Wmaybe-uninitialized warning

As found by Olof's build bot, we gain a harmless warning about a potential uninitialized variable reference in mlx5:

drivers/net/ethernet/mellanox/mlx5/core/en_tc.c: In function 'parse_tc_fdb_actions':
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c:769:13: warning: 'out_dev' may be used uninitialized in this function [-Wmaybe-uninitialized]
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c:811:21: note: 'out_dev' was declared here

This was introduced through the addition of an 'IS_ERR/PTR_ERR' pair that gcc is unfortunately unable to completely figure out. The problem is that gcc cannot tell that if(IS_ERR()) in mlx5e_route_lookup_ipv4() is equivalent to checking if(err) later, so it assumes that 'out_dev' is used after the 'return PTR_ERR(rt)'.

The PTR_ERR_OR_ZERO() case by comparison is fairly easy to detect by gcc, so it can't get that wrong, so it no longer warns.

Hadar Hen Zion already attempted to fix the warning earlier by adding fake initializations, but that ended up not fully addressing all warnings, so I'm reverting it now that it is no longer needed.

Link: http://arm-soc.lixom.net/buildlogs/mainline/v4.10-rc3-98-gcff3b2c/
Fixes: a42485eb0ee4 ("net/mlx5e: TC ipv4 tunnel encap offload error flow fixes")
Fixes: a757d108dc1a ("net/mlx5e: Fix kbuild warnings for uninitialized parameters")
Signed-off-by: Arnd Bergmann
Signed-off-by: Or Gerlitz
Signed-off-by: David S. Miller
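The two shapes being contrasted above, reduced to a toy example (hypothetical functions, not the mlx5e code): in the first form the caller's "if (err)" test and the callee's "if (IS_ERR())" test are logically equivalent, but gcc cannot always prove it and may warn that the output pointer is used uninitialized; PTR_ERR_OR_ZERO() keeps the error derivation in a single expression that gcc follows without trouble.

    #include <linux/err.h>

    struct toy_dev;
    struct toy_dev *toy_lookup(int key);    /* returns an ERR_PTR() on failure */

    /* form that can trigger -Wmaybe-uninitialized in callers */
    static int toy_resolve_v1(int key, struct toy_dev **out)
    {
        struct toy_dev *d = toy_lookup(key);

        if (IS_ERR(d))
            return PTR_ERR(d);      /* gcc may not connect this to *out staying unset */
        *out = d;
        return 0;
    }

    /* equivalent form that gcc can see through */
    static int toy_resolve_v2(int key, struct toy_dev **out)
    {
        struct toy_dev *d = toy_lookup(key);

        *out = IS_ERR(d) ? NULL : d;
        return PTR_ERR_OR_ZERO(d);
    }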
commit 8a367e74c0120ef68c8c70d5a025648c96626dff
Author: Basil Gunn
Date: Sat Jan 14 12:18:55 2017 -0800

ax25: Fix segfault after sock connection timeout

The ax.25 socket connection timed out and the sock struct has previously been taken down, i.e. the sock struct is now a NULL pointer. Checking the sock_flag causes the segfault. Check if the socket struct pointer is NULL before checking sock_flag. This segfault is seen in timed out netrom connections.

Please submit to -stable.

Signed-off-by: Basil Gunn
Signed-off-by: David S. Miller

commit f1f7714ea51c56b7163fb1a5acf39c6a204dd758
Author: Daniel Borkmann
Date: Fri Jan 13 23:38:15 2017 +0100

bpf: rework prog_digest into prog_tag

Commit 7bd509e311f4 ("bpf: add prog_digest and expose it via fdinfo/netlink") was recently discussed, partially due to the admittedly suboptimal name of "prog_digest" in combination with sha1 hash usage; thus, inevitably and rightfully, concerns about its security in terms of collision resistance were raised with regards to use-cases.

The intended use cases are for debugging resp. introspection only, for providing a stable "tag" over the instruction sequence that both kernel and user space can calculate independently. It's not usable at all for making a security relevant decision. So collisions where two different instruction sequences generate the same tag can happen, but ideally at a rather low rate. The "tag" will be dumped in hex and is short enough to introspect in tracepoints or kallsyms output along with other data such as stack trace, etc. Thus, this patch performs a rename into prog_tag and truncates the tag to a short output (64 bits) to make it obvious it's not collision-free.

Should in future a hash or facility be needed with a security relevant focus, then we can think about requirements, constraints, etc that would fit to that situation. For now, rework the exposed parts for the current use cases as long as nothing has been released yet. Tested on x86_64 and s390x.

Fixes: 7bd509e311f4 ("bpf: add prog_digest and expose it via fdinfo/netlink")
Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
Cc: Andy Lutomirski
Signed-off-by: David S. Miller

commit 613f050d68a8ed3c0b18b9568698908ef7bbc1f7
Author: Masami Hiramatsu
Date: Wed Jan 11 15:01:57 2017 +0900

perf probe: Fix to probe on gcc generated functions in modules

Fix to probe on gcc generated functions on modules. Since probing on a module is based on its symbol name, it should be adjusted to the actual symbols.

E.g. without this fix, perf probe shows a probe definition on a non-existent symbol as below.

$ perf probe -m build-x86_64/net/netfilter/nf_nat.ko -F in_range*
in_range.isra.12
$ perf probe -m build-x86_64/net/netfilter/nf_nat.ko -D in_range
p:probe/in_range nf_nat:in_range+0

With this fix, perf probe correctly shows a probe on the gcc-generated symbol.

$ perf probe -m build-x86_64/net/netfilter/nf_nat.ko -D in_range
p:probe/in_range nf_nat:in_range.isra.12+0

This also fixes the same problem on an online module as below.
$ perf probe -m i915 -D assert_plane
p:probe/assert_plane i915:assert_plane.constprop.134+0

Signed-off-by: Masami Hiramatsu
Tested-by: Arnaldo Carvalho de Melo
Cc: Jiri Olsa
Cc: Namhyung Kim
Cc: Peter Zijlstra
Link: http://lkml.kernel.org/r/148411450673.9978.14905987549651656075.stgit@devbox
Signed-off-by: Arnaldo Carvalho de Melo

commit 3e96dac7c956089d3f23aca98c4dfca57b6aaf8a
Author: Masami Hiramatsu
Date: Wed Jan 11 15:00:47 2017 +0900

perf probe: Add error checks to offline probe post-processing

Add error check codes to post-processing and improve it for offline probe events as follows:

- Post-processing fails if no matched symbol is found in the map (-ENOENT) or strdup() fails (-ENOMEM).
- Even if the symbol name is the same, it updates the symbol address and offset.

Signed-off-by: Masami Hiramatsu
Cc: Jiri Olsa
Cc: Namhyung Kim
Cc: Peter Zijlstra
Link: http://lkml.kernel.org/r/148411443738.9978.4617979132625405545.stgit@devbox
Signed-off-by: Arnaldo Carvalho de Melo

commit 57d5f64d83ab5b5a5118b1597386dd76eaf4340d
Author: Parthasarathy Bhuvaragan
Date: Fri Jan 13 15:46:25 2017 +0100

tipc: allocate user memory with GFP_KERNEL flag

Until now, we always allocate memory with the GFP_ATOMIC flag. When the system is under memory pressure and a user tries to send, the send fails due to low memory. However, the user application can wait for free memory if we allocate it using the GFP_KERNEL flag.

In this commit, we allocate memory with GFP_KERNEL for all user allocations.

Reported-by: Rune Torgersen
Acked-by: Jon Maloy
Signed-off-by: Parthasarathy Bhuvaragan
Signed-off-by: David S. Miller

commit 34c55cf2fc75f8bf6ba87df321038c064cf2d426
Author: Karicheri, Muralidharan
Date: Fri Jan 13 09:32:34 2017 -0500

net: phy: dp83867: allow RGMII_TXID/RGMII_RXID interface types

Currently the dp83867 driver returns an error if the phy interface type PHY_INTERFACE_MODE_RGMII_RXID is used to set the rx-only internal delay. A similar issue happens for PHY_INTERFACE_MODE_RGMII_TXID. Fix this by also checking the interface type if a particular delay value is missing in the phy dt bindings. Also update the DT document accordingly.

Signed-off-by: Murali Karicheri
Signed-off-by: Sekhar Nori
Signed-off-by: David S. Miller

commit 02ca0423fd65a0a9c4d70da0dbb8f4b8503f08c7
Author: Jakub Sitnicki
Date: Fri Jan 13 10:12:20 2017 +0100

ip6_tunnel: Account for tunnel header in tunnel MTU

With ip6gre we have a tunnel header which also makes the tunnel MTU smaller. We need to reserve room for it. Previously we were using up space reserved for the Tunnel Encapsulation Limit option header (RFC 2473).

Also, after commit b05229f44228 ("gre6: Cleanup GREv6 transmit path, call common GRE functions") our contract with the caller has changed. Now we check if the packet length exceeds the tunnel MTU after the tunnel header has been pushed, unlike before. This is reflected in the check where we look at the packet length minus the size of the tunnel header, which is already accounted for in the tunnel MTU.

Fixes: b05229f44228 ("gre6: Cleanup GREv6 transmit path, call common GRE functions")
Signed-off-by: Jakub Sitnicki
Signed-off-by: David S. Miller
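The arithmetic behind the MTU change above, as a stand-alone toy calculation (made-up numbers, not the ip6_tunnel code): whatever the tunnel will push in front of the payload has to come out of the advertised tunnel MTU, and once the header has been pushed the length check has to compare against that reduced value.

    #include <stdio.h>

    #define OUTER_IPV6_HDR  40   /* fixed IPv6 header pushed by the tunnel */
    #define GRE_BASE_HDR     4   /* minimal GRE header, no key/csum/seq */
    #define ENCAP_LIMIT_OPT  8   /* Tunnel Encapsulation Limit option (RFC 2473), if used */

    int main(void)
    {
        int link_mtu = 1500;
        int tnl_hlen = OUTER_IPV6_HDR + GRE_BASE_HDR;
        int payload_mtu = link_mtu - tnl_hlen - ENCAP_LIMIT_OPT;

        /* after the tunnel header is pushed, compare (packet_len - tnl_hlen)
         * against payload_mtu rather than the raw packet length */
        printf("tunnel payload MTU: %d\n", payload_mtu);
        return 0;
    }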
commit d2d4edbebe07ddb77980656abe7b9bc7a9e0cdf7
Author: Masami Hiramatsu
Date: Wed Jan 11 14:59:38 2017 +0900

perf probe: Fix to show correct locations for events on modules

Fix to show correct locations for events on modules by relocating the given address instead of retrying after failure.

This happens when the module text size is big enough, bigger than sh_addr, because the original code retries with the given address + sh_addr if it failed to find a CU DIE at the given address. Any address smaller than sh_addr always fails and it retries with the correct address, but addresses bigger than sh_addr will get a CU DIE which is on the given address (not adjusted by sh_addr).

In my environment (x86-64), the sh_addr of the ".text" section is 0x10030. Since i915 is a huge kernel module, we can see this issue as below.

$ grep "[Tt] .*\[i915\]" /proc/kallsyms | sort | head -n1
ffffffffc0270000 t i915_switcheroo_can_switch [i915]

ffffffffc0270000 + 0x10030 = ffffffffc0280030, so we'll check symbols crossing this boundary.

$ grep "[Tt] .*\[i915\]" /proc/kallsyms | grep -B1 ^ffffffffc028\ | head -n 2
ffffffffc027ff80 t haswell_init_clock_gating [i915]
ffffffffc0280110 t valleyview_init_clock_gating [i915]

So set up probes on both functions and see what happens.

$ sudo ./perf probe -m i915 -a haswell_init_clock_gating \
  -a valleyview_init_clock_gating
Added new events:
probe:haswell_init_clock_gating (on haswell_init_clock_gating in i915)
probe:valleyview_init_clock_gating (on valleyview_init_clock_gating in i915)

You can now use it in all perf tools, such as:

perf record -e probe:valleyview_init_clock_gating -aR sleep 1

$ sudo ./perf probe -l
probe:haswell_init_clock_gating (on haswell_init_clock_gating@gpu/drm/i915/intel_pm.c in i915)
probe:valleyview_init_clock_gating (on i915_vga_set_decode:4@gpu/drm/i915/i915_drv.c in i915)

As you can see, haswell_init_clock_gating is correctly shown, but valleyview_init_clock_gating is not. With this patch, both events are shown correctly.

$ sudo ./perf probe -l
probe:haswell_init_clock_gating (on haswell_init_clock_gating@gpu/drm/i915/intel_pm.c in i915)
probe:valleyview_init_clock_gating (on valleyview_init_clock_gating@gpu/drm/i915/intel_pm.c in i915)

Committer notes:

In my case:

# perf probe -m i915 -a haswell_init_clock_gating -a valleyview_init_clock_gating
Added new events:
probe:haswell_init_clock_gating (on haswell_init_clock_gating in i915)
probe:valleyview_init_clock_gating (on valleyview_init_clock_gating in i915)

You can now use it in all perf tools, such as:

perf record -e probe:valleyview_init_clock_gating -aR sleep 1

# perf probe -l
probe:haswell_init_clock_gating (on i915_getparam+432@gpu/drm/i915/i915_drv.c in i915)
probe:valleyview_init_clock_gating (on __i915_printk+240@gpu/drm/i915/i915_drv.c in i915)
#
# readelf -SW /lib/modules/4.9.0+/build/vmlinux | egrep -w '.text|Name'
[Nr] Name Type Address Off Size ES Flg Lk Inf Al
[ 1] .text PROGBITS ffffffff81000000 200000 822fd3 00 AX 0 0 4096
#

So both are b0rked, now with the fix:

# perf probe -m i915 -a haswell_init_clock_gating -a valleyview_init_clock_gating
Added new events:
probe:haswell_init_clock_gating (on haswell_init_clock_gating in i915)
probe:valleyview_init_clock_gating (on valleyview_init_clock_gating in i915)

You can now use it in all perf tools, such as:

perf record -e probe:valleyview_init_clock_gating -aR sleep 1

# perf probe -l
probe:haswell_init_clock_gating (on haswell_init_clock_gating@gpu/drm/i915/intel_pm.c in i915)
probe:valleyview_init_clock_gating (on valleyview_init_clock_gating@gpu/drm/i915/intel_pm.c in i915)
#

Both look correct.
Signed-off-by: Masami Hiramatsu
Tested-by: Arnaldo Carvalho de Melo
Cc: Jiri Olsa
Cc: Namhyung Kim
Cc: Peter Zijlstra
Link: http://lkml.kernel.org/r/148411436777.9978.1440275861947194930.stgit@devbox
Signed-off-by: Arnaldo Carvalho de Melo

commit 1666d49e1d416fcc2cce708242a52fe3317ea8ba
Author: Hangbin Liu
Date: Thu Jan 12 21:19:37 2017 +0800

mld: do not remove mld source list info when set link down

This is an IPv6 version of commit 24803f38a5c0 ("igmp: do not remove igmp souce list..."). In mld_del_delrec(), we restore back all source filter info instead of flushing them.

Move mld_clear_delrec() from ipv6_mc_down() to ipv6_mc_destroy_dev(), since we should not remove source list info when the link is set down. Remove igmp6_group_dropped() in ipv6_mc_destroy_dev() since we have called it in ipv6_mc_down().

Also clear all source info after igmp6_group_dropped() instead of in it, because ipv6_mc_down() will call igmp6_group_dropped().

Signed-off-by: Hangbin Liu
Signed-off-by: David S. Miller

commit 90f92c631b210c1e97080b53a9d863783281a932
Author: Linus Walleij
Date: Tue Sep 13 12:31:17 2016 +0100

ARM: 8613/1: Fix the uaccess crash on PB11MPCore

The following patch was sketched by Russell in response to my crashes on the PB11MPCore after the patch for software-based privileged-no-access support for ARMv8.1. See this thread: http://marc.info/?l=linux-arm-kernel&m=144051749807214&w=2

I am unsure what is going on, I suspect everyone involved in the discussion is. I just want to repost this to get the discussion restarted, as I still have to apply this patch with every kernel iteration to get my PB11MPCore Realview running.

Testing by Neil Armstrong on the Oxnas NAS has revealed that this bug exists also on that widely deployed hardware, so we are probably currently regressing all ARM11MPCore systems.

Cc: Russell King
Cc: Will Deacon
Fixes: a5e090acbf54 ("ARM: software-based priviledged-no-access support")
Tested-by: Neil Armstrong
Signed-off-by: Linus Walleij
Signed-off-by: Russell King

commit 34393529163af7163ef8459808e3cf2af7db7f16
Author: Ivan Vecera
Date: Fri Jan 13 22:38:29 2017 +0100

be2net: fix MAC addr setting on privileged BE3 VFs

During interface opening, the MAC address stored in netdev->dev_addr is programmed in the HW, with the exception of BE3 VFs where the initial MAC is programmed by the parent PF. This is OK as long as the MAC address is not changed while an interface is down. In that case the requested MAC is stored to netdev->dev_addr and later stored into HW during opening. But this is not done for all BE3 VFs, so the NIC HW does not know anything about this change and all traffic is filtered.

This is the case of bonding if fail_over_mac == 0, where the MACs of the slaves are changed while they are down.

The be2net behavior is too restrictive, because if a BE3 VF has the FILTMGMT privilege then it is able to modify its MAC without any restriction. To solve the described problem the driver should take care of these privileged BE3 VFs so the MAC is programmed during opening. And by contrast, unprivileged BE3 VFs should not be allowed to change their MAC in any case.

Cc: Sathya Perla
Cc: Ajit Khaparde
Cc: Sriharsha Basavapatna
Cc: Somnath Kotur
Signed-off-by: Ivan Vecera
Signed-off-by: David S. Miller

commit 6d928ae590c8d58cfd5cca997d54394de139cbb7
Author: Ivan Vecera
Date: Fri Jan 13 22:38:28 2017 +0100

be2net: don't delete MAC on close on unprivileged BE3 VFs

BE3 VFs without the FILTMGMT privilege are not allowed to modify their MAC, VLAN table and UC/MC lists. So don't try to delete the MAC on such VFs.
Cc: Sathya Perla
Cc: Ajit Khaparde
Cc: Sriharsha Basavapatna
Cc: Somnath Kotur
Signed-off-by: Ivan Vecera
Signed-off-by: David S. Miller

commit fe68d8bfe59c561664aa87d827aa4b320eb08895
Author: Ivan Vecera
Date: Fri Jan 13 22:38:27 2017 +0100

be2net: fix status check in be_cmd_pmac_add()

The return value from be_mcc_notify_wait() contains a base completion status together with an additional status. The base_status() macro needs to be used to access the base status.

Fixes: e3a7ae2 ("be2net: Changing MAC Address of a VF was broken")
Cc: Sathya Perla
Cc: Ajit Khaparde
Cc: Sriharsha Basavapatna
Cc: Somnath Kotur
Signed-off-by: Ivan Vecera
Signed-off-by: David S. Miller

commit d43e6fb4ac4abfe4ef7c102833ed02330ad701e0
Author: Arnd Bergmann
Date: Mon Jan 16 14:20:54 2017 +0100

cpmac: remove hopeless #warning

The #warning was present 10 years ago when the driver first got merged. As the platform is rather obsolete by now, it seems very unlikely that the warning will cause anyone to fix the code properly.

kernelci.org reports the warning for every build in the meantime, so I think it's better to just turn it into a code comment to reduce noise.

Signed-off-by: Arnd Bergmann
Signed-off-by: David S. Miller

commit 8ec3e8a192ba6f13be4522ee81227c792c86fb1a
Author: Masaru Nagai
Date: Mon Jan 16 11:45:21 2017 +0100

ravb: do not use zero-length alignment DMA descriptor

Due to alignment requirements of the hardware, transmissions are split into two DMA descriptors: a small padding descriptor of 0 - 3 bytes in length followed by a descriptor for the rest of the packet.

In the case of IP packets the first descriptor will never be zero, due to the way that the stack aligns buffers for IP packets. However, for non-IP packets it may be zero.

In that case it has been reported that timeouts occur, presumably because transmission stops at the first zero-length DMA descriptor and thus the packet is not transmitted. However, in my environment a BUG is triggered as follows:

[ 20.381417] ------------[ cut here ]------------
[ 20.386054] kernel BUG at lib/swiotlb.c:495!
[ 20.390324] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[ 20.395805] Modules linked in:
[ 20.398862] CPU: 0 PID: 2089 Comm: mz Not tainted 4.10.0-rc3-00001-gf13ad2db193f #162
[ 20.406689] Hardware name: Renesas Salvator-X board based on r8a7796 (DT)
[ 20.413474] task: ffff80063b1f1900 task.stack: ffff80063a71c000
[ 20.419404] PC is at swiotlb_tbl_map_single+0x178/0x2ec
[ 20.424625] LR is at map_single+0x4c/0x98
[ 20.520446] Process mz (pid: 2089, stack limit = 0xffff80063a71c000)
[ 20.931867] Call trace:
[ 21.016097] [] swiotlb_tbl_map_single+0x178/0x2ec
[ 21.022362] [] map_single+0x4c/0x98
[ 21.027411] [] swiotlb_map_page+0xa4/0x138
[ 21.033072] [] __swiotlb_map_page+0x20/0x7c
[ 21.038821] [] ravb_start_xmit+0x174/0x668
[ 21.044484] [] dev_hard_start_xmit+0x8c/0x120
[ 21.050407] [] sch_direct_xmit+0x108/0x1a0
[ 21.056064] [] __dev_queue_xmit+0x194/0x4cc
[ 21.061807] [] dev_queue_xmit+0x10/0x18
[ 21.067214] [] packet_sendmsg+0xf40/0x1220
[ 21.072873] [] sock_sendmsg+0x18/0x2c
[ 21.078097] [] SyS_sendto+0xb0/0xf0
[ 21.083150] [] el0_svc_naked+0x24/0x28
[ 21.088462] Code: d34bfef7 2a1803f3 1a9f86d6 35fff878 (d4210000)
[ 21.094611] ---[ end trace 5bc544ad491f3814 ]---
[ 21.099234] Kernel panic - not syncing: Fatal exception in interrupt
[ 21.105587] Kernel Offset: disabled
[ 21.109073] Memory Limit: none
[ 21.112126] ---[ end Kernel panic - not syncing: Fatal exception in interrupt

Fixes: 2f45d1902acf ("ravb: minimize TX data copying")
Signed-off-by: Masaru Nagai
Acked-by: Sergei Shtylyov
Signed-off-by: David S. Miller

commit 0d7f4f0594fc38531e37b94a73ea3ebcc9d9bc11
Author: Russell King
Date: Tue Nov 1 20:27:13 2016 +0000

MAINTAINERS: update rmk's entries

Update my entries in the MAINTAINERS file with the same email address for kernel work, and, now that the git tree is hosted on more suitable hardware, add git tree references where appropriate.

Signed-off-by: Russell King

commit 8cf699ec849f4ca1413cea01289bd7d37dbcc626
Author: Eric Dumazet
Date: Fri Jan 13 08:39:24 2017 -0800

mlx4: do not call napi_schedule() without care

Disable BH around the call to napi_schedule() to avoid the following warning:

[ 52.095499] NOHZ: local_softirq_pending 08
[ 52.421291] NOHZ: local_softirq_pending 08
[ 52.608313] NOHZ: local_softirq_pending 08

Fixes: 8d59de8f7bb3 ("net/mlx4_en: Process all completions in RX rings after port goes up")
Signed-off-by: Eric Dumazet
Cc: Erez Shitrit
Cc: Eugenia Emantayev
Cc: Tariq Toukan
Acked-by: Tariq Toukan
Signed-off-by: David S. Miller
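The mlx4 fix above boils down to one small pattern (illustrative context, not the mlx4_en code): napi_schedule() only raises NET_RX_SOFTIRQ, and when it is invoked from plain process context nothing guarantees the softirq runs promptly, which is what the "NOHZ: local_softirq_pending 08" messages complain about. Bracketing the call with local_bh_disable()/local_bh_enable() makes the enable run any pending softirqs on the spot.

    #include <linux/bottom_half.h>
    #include <linux/netdevice.h>

    /* called from process context, e.g. while bringing a port back up */
    static void toy_kick_rx_napi(struct napi_struct *napi)
    {
        local_bh_disable();
        napi_schedule(napi);    /* raises NET_RX_SOFTIRQ for this CPU */
        local_bh_enable();      /* processes pending softirqs before returning */
    }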
commit ee6ff743e3a4b697e8286054667d7e4e1b56510d
Author: Ulf Hansson
Date: Fri Jan 13 12:05:03 2017 +0100

mmc: core: Restore parts of the polling policy when switch to HS/HS DDR

Regressions for not being able to detect an eMMC HS DDR mode card have been reported for the sdhci-esdhc-imx driver, but potentially other sdhci variants may suffer from a similar problem.

The commit e173f8911f09 ("mmc: core: Update CMD13 polling policy when switch to HS DDR mode") is causing the problem. It seems that change moved one step too far, regarding changing the host's timing before polling for a busy card.

To fix this, let's move back to the behaviour where the host's timing is updated after the polling, but before the switch status is fetched and validated.

In cases when polling with CMD13, we keep validating the switch status at each attempt. However, to align with the other card busy detection mechanisms, let's fetch and validate the switch status also after the host's timing is updated.

Reported-by: Clemens Gruber
Reported-by: Gary Bisson
Fixes: e173f8911f09 ("mmc: core: Update CMD13 polling policy when switch..")
Cc: Shawn Lin
Cc: Dong Aisheng
Cc: Haibo Chen
Signed-off-by: Ulf Hansson
Tested-by: Clemens Gruber
Tested-by: Jagan Teki
Reviewed-by: Shawn Lin
Tested-by: Haibo Chen
Reviewed-by: Dong Aisheng

commit 4205e4786d0b9fc3b4fec7b1910cf645a0468307
Author: Thomas Gleixner
Date: Tue Jan 10 14:01:05 2017 +0100

cpu/hotplug: Provide dynamic range for prepare stage

Mathieu reported that the LTTNG modules are broken as of 4.10-rc1 due to the removal of the cpu hotplug notifiers. Usually I don't care much about out of tree modules, but LTTNG is widely used in distros.

There are two ways to solve that:

1) Reserve a hotplug state for LTTNG

2) Add a dynamic range for the prepare states.

While #1 is the simplest solution, #2 is the proper one as we can convert in-tree users, which do not care about ordering, to the dynamic range as well.

Add a dynamic range which allows LTTNG to request states in the prepare stage.

Reported-and-tested-by: Mathieu Desnoyers
Signed-off-by: Thomas Gleixner
Reviewed-by: Mathieu Desnoyers
Cc: Peter Zijlstra
Cc: Sebastian Sewior
Cc: Steven Rostedt
Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1701101353010.3401@nanos
Signed-off-by: Thomas Gleixner

commit 75f01a4c9cc291ff5cb28ca1216adb163b7a20ee
Author: Lance Richardson
Date: Thu Jan 12 19:33:18 2017 -0500

openvswitch: maintain correct checksum state in conntrack actions

When executing conntrack actions on skbuffs with checksum mode CHECKSUM_COMPLETE, the checksum must be updated to account for header pushes and pulls. Otherwise we get "hw csum failure" logs similar to this (ICMP packet received on geneve tunnel via ixgbe NIC):

[ 405.740065] genev_sys_6081: hw csum failure
[ 405.740106] CPU: 3 PID: 0 Comm: swapper/3 Tainted: G I 4.10.0-rc3+ #1
[ 405.740108] Call Trace:
[ 405.740113] dump_stack+0x63/0x87
[ 405.740116] netdev_rx_csum_fault+0x3a/0x40
[ 405.740118] __skb_checksum_complete+0xcf/0xe0
[ 405.740120] nf_ip_checksum+0xc8/0xf0
[ 405.740124] icmp_error+0x1de/0x351 [nf_conntrack_ipv4]
[ 405.740132] nf_conntrack_in+0xe1/0x550 [nf_conntrack]
[ 405.740137] ? find_bucket.isra.2+0x62/0x70 [openvswitch]
[ 405.740143] __ovs_ct_lookup+0x95/0x980 [openvswitch]
[ 405.740145] ? netif_rx_internal+0x44/0x110
[ 405.740149] ovs_ct_execute+0x147/0x4b0 [openvswitch]
[ 405.740153] do_execute_actions+0x22e/0xa70 [openvswitch]
[ 405.740157] ovs_execute_actions+0x40/0x120 [openvswitch]
[ 405.740161] ovs_dp_process_packet+0x84/0x120 [openvswitch]
[ 405.740166] ovs_vport_receive+0x73/0xd0 [openvswitch]
[ 405.740168] ? udp_rcv+0x1a/0x20
[ 405.740170] ? ip_local_deliver_finish+0x93/0x1e0
[ 405.740172] ? ip_local_deliver+0x6f/0xe0
[ 405.740174] ? ip_rcv_finish+0x3a0/0x3a0
[ 405.740176] ? ip_rcv_finish+0xdb/0x3a0
[ 405.740177] ? ip_rcv+0x2a7/0x400
[ 405.740180] ? __netif_receive_skb_core+0x970/0xa00
[ 405.740185] netdev_frame_hook+0xd3/0x160 [openvswitch]
[ 405.740187] __netif_receive_skb_core+0x1dc/0xa00
[ 405.740194] ? ixgbe_clean_rx_irq+0x46d/0xa20 [ixgbe]
[ 405.740197] __netif_receive_skb+0x18/0x60
[ 405.740199] netif_receive_skb_internal+0x40/0xb0
[ 405.740201] napi_gro_receive+0xcd/0x120
[ 405.740204] gro_cell_poll+0x57/0x80 [geneve]
[ 405.740206] net_rx_action+0x260/0x3c0
[ 405.740209] __do_softirq+0xc9/0x28c
[ 405.740211] irq_exit+0xd9/0xf0
[ 405.740213] do_IRQ+0x51/0xd0
[ 405.740215] common_interrupt+0x93/0x93

Fixes: 7f8a436eaa2c ("openvswitch: Add conntrack action")
Signed-off-by: Lance Richardson
Acked-by: Pravin B Shelar
Signed-off-by: David S. Miller
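A small sketch of the bookkeeping the openvswitch fix above is about (toy helpers, not the ovs code): with CHECKSUM_COMPLETE, skb->csum covers exactly the bytes currently in the packet, so every header that is pulled off or pushed back on has to be folded out of or into the sum, otherwise a later __skb_checksum_complete() reports "hw csum failure".

    #include <linux/skbuff.h>
    #include <linux/string.h>

    /* strip 'len' bytes of header while keeping CHECKSUM_COMPLETE consistent */
    static void *toy_pull_header(struct sk_buff *skb, unsigned int len)
    {
        return skb_pull_rcsum(skb, len);    /* pull + checksum adjustment in one step */
    }

    /* mirror operation when a header is pushed back in front of the data */
    static void toy_push_header(struct sk_buff *skb, const void *hdr, unsigned int len)
    {
        memcpy(skb_push(skb, len), hdr, len);
        skb_postpush_rcsum(skb, skb->data, len);    /* add the new bytes into skb->csum */
    }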
commit 602d9858f07c72eab64f5f00e2fae55f9902cfbe
Author: Nikita Yushchenko
Date: Wed Jan 11 21:56:31 2017 +0300

swiotlb: ensure that page-sized mappings are page-aligned

Some drivers do depend on page mappings to be page aligned. Swiotlb already enforces such alignment for mappings greater than a page; extend that to page-sized mappings as well.

Without this fix, nvme hits BUG() in nvme_setup_prps(), because that routine assumes page-aligned mappings.

Signed-off-by: Nikita Yushchenko
Reviewed-by: Christoph Hellwig
Reviewed-by: Sagi Grimberg
Signed-off-by: Konrad Rzeszutek Wilk

commit 52d7e48b86fc108e45a656d8e53e4237993c481d
Author: Paul E. McKenney
Date: Tue Jan 10 02:28:26 2017 -0800

rcu: Narrow early boot window of illegal synchronous grace periods

The current preemptible RCU implementation goes through three phases during bootup. In the first phase, there is only one CPU that is running with preemption disabled, so that a no-op is a synchronous grace period. In the second mid-boot phase, the scheduler is running, but RCU has not yet gotten its kthreads spawned (and, for expedited grace periods, workqueues are not yet running). During this time, any attempt to do a synchronous grace period will hang the system (or complain bitterly, depending). In the third and final phase, RCU is fully operational and everything works normally.

This has been OK for some time, but there have recently been some synchronous grace periods showing up during the second mid-boot phase. This code worked "by accident" for awhile, but started failing as soon as expedited RCU grace periods switched over to workqueues in commit 8b355e3bc140 ("rcu: Drive expedited grace periods from workqueue"). Note that the code was buggy even before this commit, as it was subject to failure on real-time systems that forced all expedited grace periods to run as normal grace periods (for example, using the rcu_normal ksysfs parameter). The callchain from the failure case is as follows:

early_amd_iommu_init()
|-> acpi_put_table(ivrs_base);
|-> acpi_tb_put_table(table_desc);
|-> acpi_tb_invalidate_table(table_desc);
|-> acpi_tb_release_table(...)
|-> acpi_os_unmap_memory
|-> acpi_os_unmap_iomem
|-> acpi_os_map_cleanup
|-> synchronize_rcu_expedited

The kernel showing this callchain was built with CONFIG_PREEMPT_RCU=y, which caused the code to try using workqueues before they were initialized, which did not go well.

This commit therefore reworks RCU to permit synchronous grace periods to proceed during this mid-boot phase. This commit is therefore a fix to a regression introduced in v4.9, and is therefore being put forward post-merge-window in v4.10.

This commit sets a flag from the existing rcu_scheduler_starting() function which causes all synchronous grace periods to take the expedited path. The expedited path now checks this flag, using the requesting task to drive the expedited grace period forward during the mid-boot phase. Finally, this flag is updated by a core_initcall() function named rcu_exp_runtime_mode(), which causes the runtime codepaths to be used.

Note that this arrangement assumes that tasks are not sent POSIX signals (or anything similar) from the time that the first task is spawned through core_initcall() time.

Fixes: 8b355e3bc140 ("rcu: Drive expedited grace periods from workqueue")
Reported-by: "Zheng, Lv"
Reported-by: Borislav Petkov
Signed-off-by: Paul E. McKenney
Tested-by: Stan Kain
Tested-by: Ivan
Tested-by: Emanuel Castelo
Tested-by: Bruno Pesavento
Tested-by: Borislav Petkov
Tested-by: Frederic Bezies
Cc: # 4.9.0-

commit f466ae66fa6a599f9a53b5f9bafea4b8cfffa7fb
Author: Paul E. McKenney
Date: Mon Jan 9 23:23:15 2017 -0800

rcu: Remove cond_resched() from Tiny synchronize_sched()

It is now legal to invoke synchronize_sched() at early boot, which causes Tiny RCU's synchronize_sched() to emit spurious splats. This commit therefore removes the cond_resched() from Tiny RCU's synchronize_sched().

Fixes: 8b355e3bc140 ("rcu: Drive expedited grace periods from workqueue")
Signed-off-by: Paul E. McKenney
Cc: # 4.9.0-

commit c6180a6237174f481dc856ed6e890d8196b6f0fb
Author: Trond Myklebust
Date: Fri Jan 13 13:31:32 2017 -0500

NFSv4: Fix client recovery when server reboots multiple times

If the server reboots multiple times, the client should rely on the server to tell it that it cannot reclaim state, as per section 9.6.3.4 in RFC7530 and section 8.4.2.1 in RFC5661.

Currently, the client is being too conservative, and is assuming that if the server reboots while state recovery is in progress, then it must ignore state that was not recovered before the reboot.

Signed-off-by: Trond Myklebust

commit 003c941057eaa868ca6fedd29a274c863167230d
Author: Shannon Nelson
Date: Thu Jan 12 14:24:58 2017 -0800

tcp: fix tcp_fastopen unaligned access complaints on sparc

Fix up a data alignment issue on sparc by swapping the order of the cookie byte array field with the length field in struct tcp_fastopen_cookie, and making it a proper union to clean up the typecasting.

This addresses log complaints like these:

log_unaligned: 113 callbacks suppressed
Kernel unaligned access at TPC[976490] tcp_try_fastopen+0x2d0/0x360
Kernel unaligned access at TPC[9764ac] tcp_try_fastopen+0x2ec/0x360
Kernel unaligned access at TPC[9764c8] tcp_try_fastopen+0x308/0x360
Kernel unaligned access at TPC[9764e4] tcp_try_fastopen+0x324/0x360
Kernel unaligned access at TPC[976490] tcp_try_fastopen+0x2d0/0x360

Cc: Eric Dumazet
Signed-off-by: Shannon Nelson
Acked-by: Eric Dumazet
Signed-off-by: David S. Miller

commit fa79581ea66ca43d56ef065346ac5be767fcb418
Author: David Lebrun
Date: Thu Jan 12 21:30:01 2017 +0100

ipv6: sr: fix several BUGs when preemption is enabled

When CONFIG_PREEMPT=y, CONFIG_IPV6=m and CONFIG_SEG6_HMAC=y, seg6_hmac_init() is called during the initialization of the ipv6 module. This causes a subsequent call to smp_processor_id() with preemption enabled, resulting in the following trace.

[ 20.451460] BUG: using smp_processor_id() in preemptible [00000000] code: systemd/1
[ 20.452556] caller is debug_smp_processor_id+0x17/0x19
[ 20.453304] CPU: 0 PID: 1 Comm: systemd Not tainted 4.9.0-rc5-00973-g46738b1 #1
[ 20.454406] ffffc9000062fc18 ffffffff813607b2 0000000000000000 ffffffff81a7f782
[ 20.455528] ffffc9000062fc48 ffffffff813778dc 0000000000000000 00000000001dcf98
[ 20.456539] ffffffffa003bd08 ffffffff81af93e0 ffffc9000062fc58 ffffffff81377905
[ 20.456539] Call Trace:
[ 20.456539] [] dump_stack+0x63/0x7f
[ 20.456539] [] check_preemption_disabled+0xd1/0xe3
[ 20.456539] [] debug_smp_processor_id+0x17/0x19
[ 20.460260] [] seg6_hmac_init+0xfa/0x192 [ipv6]
[ 20.460260] [] seg6_init+0x39/0x6f [ipv6]
[ 20.460260] [] inet6_init+0x21a/0x321 [ipv6]
[ 20.460260] [] ? 0xffffffffa0061000
[ 20.460260] [] do_one_initcall+0x8b/0x115
[ 20.460260] [] do_init_module+0x53/0x1c4
[ 20.460260] [] load_module+0x1153/0x14ec
[ 20.460260] [] SYSC_finit_module+0x8c/0xb9
[ 20.460260] [] ? SYSC_finit_module+0x8c/0xb9
[ 20.460260] [] SyS_finit_module+0x9/0xb
[ 20.460260] [] do_syscall_64+0x62/0x75
[ 20.460260] [] entry_SYSCALL64_slow_path+0x25/0x25

Moreover, dst_cache_* functions also call smp_processor_id(), generating a similar trace.

This patch uses raw_cpu_ptr() in seg6_hmac_init() rather than this_cpu_ptr(), and disables preemption when using dst_cache_* functions.

Signed-off-by: David Lebrun
Signed-off-by: David S. Miller
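The rule behind the seg6_hmac fix above, shown on a toy per-cpu buffer (not the seg6 code): this_cpu_ptr() is only meaningful while the task cannot migrate, so it belongs under preempt_disable() (or local_bh_disable()); in one-time init code where any CPU's instance will do, raw_cpu_ptr() sidesteps the "using smp_processor_id() in preemptible" diagnostic.

    #include <linux/percpu.h>
    #include <linux/preempt.h>

    struct toy_scratch {
        char buf[256];
    };

    static DEFINE_PER_CPU(struct toy_scratch, toy_scratch_area);

    /* fast path: stay on one CPU while the per-cpu buffer is in use */
    static void toy_use_scratch(void)
    {
        struct toy_scratch *s;

        preempt_disable();
        s = this_cpu_ptr(&toy_scratch_area);
        s->buf[0] = 0;              /* ... work on the buffer ... */
        preempt_enable();
    }

    /* init path, preemption still enabled: any CPU's instance is acceptable */
    static void toy_init_scratch(void)
    {
        struct toy_scratch *s = raw_cpu_ptr(&toy_scratch_area);

        s->buf[0] = 0;
    }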
commit 148d3d021cf9724fcf189ce4e525a094bbf5ce89
Author: Florian Fainelli
Date: Thu Jan 12 12:09:09 2017 -0800

net: systemport: Decouple flow control from __bcm_sysport_tx_reclaim

The __bcm_sysport_tx_reclaim() function is used to reclaim transmit resources in different places within the driver. Most of them should not affect the state of the transmit flow control.

Introduce bcm_sysport_tx_clean() which cleans the ring, but does not re-enable flow control towards the networking stack, and make bcm_sysport_tx_reclaim() do the actual transmit queue flow control.

Fixes: 80105befdb4b ("net: systemport: add Broadcom SYSTEMPORT Ethernet MAC driver")
Signed-off-by: Florian Fainelli
Signed-off-by: David S. Miller

commit ed79c9d34f4f4c5842b66cab840315e7ac29f666
Author: Nicolas Dichtel
Date: Fri Jan 13 11:46:39 2017 +0100

ARM: put types.h in uapi

Due to the way kbuild works, this header was unintentionally exported back in 2013 when it was created, despite it not being in a uapi/ directory. This is very non-intuitive behaviour by Kbuild.

However, we've had this include exported to userland for almost four years, and searching google for "ARM types.h __UINTPTR_TYPE__" gives no hint that anyone has complained about it. So, let's make it officially exported in this state.

Signed-off-by: Nicolas Dichtel
Signed-off-by: Russell King

commit dbef53621116474bb883f76f0ba6b7640bc42332
Author: Michal Kazior
Date: Fri Jan 13 13:32:51 2017 +0100

mac80211: prevent skb/txq mismatch

The station structure is considered as not uploaded (to the driver) until drv_sta_state() finishes. This call is however done after the structure is attached to mac80211 internal lists and hashes. This means mac80211 can look up (and use) the station structure before it is uploaded to a driver.

If this happens (structure exists, but sta->uploaded is false) the fast_tx path can still be taken. Deep in the fastpath call, sta->uploaded is checked to derive the "pubsta" argument for ieee80211_get_txq(). If sta->uploaded is false (and sta is actually non-NULL) ieee80211_get_txq() is effectively downgraded to vif->txq.

At first glance this may look innocent, but it coerces mac80211 into a state that is almost guaranteed (codel may drop the offending skb) to crash, because a station-oriented skb gets queued up on a vif-oriented txq. ieee80211_tx_dequeue() ends up looking at info->control.flags and tries to use txq->sta, which in the fail case is NULL.

It's probably pointless to pretend one can downgrade an skb from sta-txq to vif-txq. Since downgrading unicast traffic to vif->txq must not be done, there's no txq to put a frame on if sta->uploaded is false. Therefore the code is made to fall back to the regular tx() op path if the described condition is hit.

Only drivers using wake_tx_queue were affected.
Example crash dump before fix:

Unable to handle kernel paging request at virtual address ffffe26c
PC is at ieee80211_tx_dequeue+0x204/0x690 [mac80211]
[] (ieee80211_tx_dequeue [mac80211]) from [] (ath10k_mac_tx_push_txq+0x54/0x1c0 [ath10k_core])
[] (ath10k_mac_tx_push_txq [ath10k_core]) from [] (ath10k_htt_txrx_compl_task+0xd78/0x11d0 [ath10k_core])
[] (ath10k_htt_txrx_compl_task [ath10k_core])
[] (ath10k_pci_napi_poll+0x54/0xe8 [ath10k_pci])
[] (ath10k_pci_napi_poll [ath10k_pci]) from [] (net_rx_action+0xac/0x160)

Reported-by: Mohammed Shafi Shajakhan
Signed-off-by: Michal Kazior
Signed-off-by: Johannes Berg

commit 43071d8fb3b7f589d72663c496a6880fb097533c
Author: Felix Fietkau
Date: Fri Jan 13 11:28:25 2017 +0100

mac80211: initialize SMPS field in HT capabilities

ibss and mesh modes copy the HT capabilities from the band without overriding the SMPS state. Unfortunately the default value 0 for the SMPS field means static SMPS instead of disabled.

This results in HT ibss and mesh setups using only single-stream rates, even though SMPS is not supposed to be active.

Initialize SMPS to disabled for all bands on ieee80211_hw_register to ensure that the value is sane where it is not overridden with the real SMPS state.

Reported-by: Elektra Wagenrad
Signed-off-by: Felix Fietkau
[move VHT TODO comment to a better place]
Signed-off-by: Johannes Berg

commit 7aa4865506a26c607e00bd9794a85785b55ebca7
Author: Vadim Lomovtsev
Date: Thu Jan 12 07:28:06 2017 -0800

net: thunderx: acpi: fix LMAC initialization

While probing BGX we request the appropriate QLM for its configuration and get the LMAC count by that request. Then, while reading configured MAC values from the SSDT table, we need to save them in the proper mapping:

BGX[i]->lmac[j].mac =

to later provide for initialization stuff. In order to fill such a mapping properly we need to add the lmac index to be used during acpi initialization, since at this moment bgx->lmac_count already contains the actual value.

Signed-off-by: Vadim Lomovtsev
Signed-off-by: David S. Miller

commit ce1ca7d2d140a1f4aaffd297ac487f246963dd2f
Author: Sriharsha Basavapatna
Date: Mon Jan 9 16:00:44 2017 +0530

svcrdma: avoid duplicate dma unmapping during error recovery

In rdma_read_chunk_frmr(), when ib_post_send() fails, the error code path invokes ib_dma_unmap_sg() to unmap the sg list. It then invokes svc_rdma_put_frmr(), which in turn tries to unmap the same sg list through ib_dma_unmap_sg() again. This second unmap is invalid and could lead to problems when the iova being unmapped is subsequently reused. Remove the call to unmap in rdma_read_chunk_frmr() and let svc_rdma_put_frmr() handle it.

Fixes: 412a15c0fe53 ("svcrdma: Port to new memory registration API")
Cc: stable@vger.kernel.org
Signed-off-by: Sriharsha Basavapatna
Reviewed-by: Chuck Lever
Reviewed-by: Yuval Shaia
Signed-off-by: J. Bruce Fields

commit 8e38b7d4d71479b23b77f01cf0e5071610b8f357
Author: Stefan Schmidt
Date: Mon Jan 2 16:58:13 2017 +0100

ieee802154: atusb: fix driver to work with older firmware versions

After the addition of the frame_retries callback we could run into cases where an ATUSB device with an older firmware version would no longer be able to bring the interface up. We keep this functionality disabled now if the minimum firmware version for this feature is not available.
Fixes: 5d82288b93db3bc ("ieee802154: atusb: implement .set_frame_retries ops callback")
Reported-by: Alexander Aring
Acked-by: Alexander Aring
Signed-off-by: Stefan Schmidt
Signed-off-by: Marcel Holtmann

commit f301606934b240fb54d8edf3618a0483e36046fc
Author: Andrey Smirnov
Date: Sun Dec 18 15:25:33 2016 -0800

at86rf230: Allow slow GPIO pins for "rstn"

Driver code never touches the "rstn" signal in atomic context, so there's no need to implicitly put such a restriction on it by using gpio_set_value to manipulate it. Replace gpio_set_value with gpio_set_value_cansleep to fix that.

As an example of where such a restriction might be inconvenient, consider a hardware design where "rstn" is connected to a pin of an I2C/SPI GPIO expander chip.

Cc: Chris Healy
Signed-off-by: Andrey Smirnov
Signed-off-by: Stefan Schmidt
Signed-off-by: Marcel Holtmann

commit 5eb35a6ccea61648a55713c076ab65423eea1ac0
Author: Stefan Schmidt
Date: Thu Dec 15 18:40:16 2016 +0100

ieee802154: atusb: do not use the stack for address fetching to make it DMA able

From 4.9 we should really avoid using the stack here as this will not be DMA-able on various platforms. This changes a buffer that was introduced in the 4.10 merge window.

Fixes: 6cc33eba232c ("ieee802154: atusb: try to read permanent extended address from device")
Reported-by: Dan Carpenter
Signed-off-by: Stefan Schmidt
Signed-off-by: Marcel Holtmann

commit 2fd2b550a5ed13b1d6640ff77630fc369636a544
Author: Stefan Schmidt
Date: Thu Dec 15 18:40:15 2016 +0100

ieee802154: atusb: make sure we set a random extended address if fetching fails

In the unlikely case where the firmware is new enough but the actual USB command still fails, make sure we set a random address and return.

Signed-off-by: Stefan Schmidt
Signed-off-by: Marcel Holtmann

commit 05a974efa4bdf6e2a150e3f27dc6fcf0a9ad5655
Author: Stefan Schmidt
Date: Thu Dec 15 18:40:14 2016 +0100

ieee802154: atusb: do not use the stack for buffers to make them DMA able

From 4.9 we should really avoid using the stack here as this will not be DMA-able on various platforms. This changes the buffers that were already present at the time 4.9 was released. This should go into stable as well.

Reported-by: Dan Carpenter
Cc: stable@vger.kernel.org
Signed-off-by: Stefan Schmidt
Signed-off-by: Marcel Holtmann

commit 546125d1614264d26080817d0c8cddb9b25081fa
Author: Scott Mayhew
Date: Thu Jan 5 16:34:51 2017 -0500

sunrpc: don't call sleeping functions from the notifier block callbacks

The inet6addr_chain is an atomic notifier chain, so we can't call anything that might sleep (like lock_sock)... instead of closing the socket from svc_age_temp_xprts_now (which is called by the notifier function), just have the rpc service threads do it instead.

Cc: stable@vger.kernel.org
Fixes: c3d4879e01be ("sunrpc: Add a function to close...")
Signed-off-by: Scott Mayhew
Signed-off-by: J. Bruce Fields

commit 78794d1890708cf94e3961261e52dcec2cc34722
Author: J. Bruce Fields
Date: Mon Jan 9 17:15:18 2017 -0500

svcrpc: don't leak contexts on PROC_DESTROY

Context expiry times are in units of seconds since boot, not unix time. The use of get_seconds() here therefore sets the expiry time decades in the future. This prevents timely freeing of contexts destroyed by client RPC_GSS_PROC_DESTROY requests. We'd still free them eventually (when the module is unloaded or the container shut down), but a lot of contexts could pile up before then.

Cc: stable@vger.kernel.org
Fixes: c5b29f885afe ("sunrpc: use seconds since boot in expiry cache")
Reported-by: Andy Adamson
Signed-off-by: J. Bruce Fields
Bruce Fields commit dcd208697707b12adeaa45643bab239c5e90ef9b Author: J. Bruce Fields Date: Wed Jan 11 20:34:50 2017 -0500 nfsd: fix supported attributes for acl & labels Oops--in 916d2d844afd I moved some constants into an array for convenience, but here I'm accidentally writing to that array. The effect is that if you ever encounter a filesystem lacking support for ACLs or security labels, then all queries of supported attributes will report that attribute as unsupported from then on. Fixes: 916d2d844afd "nfsd: clean up supported attribute handling" Signed-off-by: J. Bruce Fields commit d3129ef672cac81c4d0185336af377c8dc1091d3 Author: Trond Myklebust Date: Wed Jan 11 22:07:28 2017 -0500 NFSv4: update_changeattr should update the attribute timestamp Otherwise, the attribute cache remains marked as being expired. Signed-off-by: Trond Myklebust commit c40d52fe1c2ba25dcb8cd207c8d26ef5f57f0476 Author: Trond Myklebust Date: Wed Jan 11 12:36:11 2017 -0500 NFSv4: Don't call update_changeattr() unless the unlink is successful If the unlink wasn't successful, then the directory has presumably not changed. Signed-off-by: Trond Myklebust commit c733c49c32624f927f443be6dbabb387006bbe42 Author: Trond Myklebust Date: Wed Jan 11 12:32:26 2017 -0500 NFSv4: Don't apply change_info4 twice on rename within a directory If a file is renamed, but stays in the same directory, we will still receive 2 change_info4 structures describing the change to that directory, but we only want to apply it once. Signed-off-by: Trond Myklebust commit 2dfc61736482441993bfb7dfaa971113b53f107c Author: Trond Myklebust Date: Wed Jan 11 08:47:00 2017 -0500 NFSv4: Call update_changeattr() from _nfs4_proc_open only if a file was created We don't want to invalidate the directory attribute and data cache unless we know that a file was created, or the change attribute differs from the one in our cache. Signed-off-by: Trond Myklebust commit 8a430ed50bb1b19ca14a46661f3b1b35f2fb5c39 Author: David Ahern Date: Wed Jan 11 15:42:17 2017 -0800 net: ipv4: fix table id in getroute response rtm_table is an 8-bit field while table ids are allowed up to u32. Commit 709772e6e065 ("net: Fix routing tables with id > 255 for legacy software") added the preference to set rtm_table in dumps to RT_TABLE_COMPAT if the table id is > 255. The table id returned on get route requests should do the same. Fixes: c36ba6603a11 ("net: Allow user to get table id from route lookup") Signed-off-by: David Ahern Signed-off-by: David S. Miller commit 994c5483e7f6dbf9fea622ba2031b9d868feb4b9 Author: Timur Tabi Date: Wed Jan 11 16:45:51 2017 -0600 net: qcom/emac: grab a reference to the phydev on ACPI systems Commit 6ffe1c4cd0a7 ("net: qcom/emac: fix of_node and phydev leaks") fixed the problem with reference leaks on phydev, but the fix is device-tree specific. When the driver unloads, the reference is dropped only on DT systems. Instead, it's cleaner if we grab a reference on ACPI systems as well. When the driver unloads, we can then drop the reference without having to check whether we're on a DT system. Signed-off-by: Timur Tabi Reviewed-by: Johan Hovold Signed-off-by: David S. Miller commit ea7a80858f57d8878b1499ea0f1b8a635cc48de7 Author: David Ahern Date: Wed Jan 11 14:29:54 2017 -0800 net: lwtunnel: Handle lwtunnel_fill_encap failure Handle failure of lwtunnel_fill_encap when adding attributes to the skb.
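The fix boils down to a standard netlink error-handling pattern: check the return value of lwtunnel_fill_encap() and cancel the partially built message instead of silently ignoring the failure. The fragment below is only a sketch of that pattern with placeholder names (example_fill_encap, the goto label), assuming the 4.10-era signature lwtunnel_fill_encap(skb, lwtstate); the real hunks live in the IPv4/IPv6 route dump helpers.

#include <net/lwtunnel.h>

/* Placeholder helper: the point is only the error check, not the context. */
static int example_fill_encap(struct sk_buff *skb,
			      struct lwtunnel_state *lwtstate)
{
	if (lwtstate && lwtunnel_fill_encap(skb, lwtstate) < 0)
		goto nla_put_failure;

	return 0;

nla_put_failure:
	/* the real code trims/cancels the netlink message here */
	return -EMSGSIZE;
}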
Fixes: 571e722676fe ("ipv4: support for fib route lwtunnel encap attributes") Fixes: 19e42e451506 ("ipv6: support for fib route lwtunnel encap attributes") Signed-off-by: David Ahern Signed-off-by: David S. Miller commit 4b09ec4b14a168bf2c687e1f598140c3c11e9222 Author: Benjamin Coddington Date: Thu Jan 5 10:20:16 2017 -0500 nfs: Don't take a reference on fl->fl_file for LOCK operation I have reports of a crash that looks like __fput() was called twice for an NFSv4.0 file. It seems possible that the state manager could try to reclaim a lock and take a reference on the fl->fl_file at the same time the file is being released if, during the close(), a signal interrupts the wait for outstanding IO while removing locks, which then skips the removal of that lock. Since 83bfff23e9ed ("nfs4: have do_vfs_lock take an inode pointer") has removed the need to traverse fl->fl_file->f_inode in nfs4_lock_done(), taking that reference is no longer necessary. Signed-off-by: Benjamin Coddington Reviewed-by: Jeff Layton Signed-off-by: Trond Myklebust commit 18a3ed59d09cf81a6447aadf6931bf0c9ffec5e0 Author: Kazuya Mizuguchi Date: Thu Jan 12 13:21:06 2017 +0100 ravb: Remove Rx overflow log messages Remove the Rx overflow log messages because, in an environment where logging results in network traffic, logging may cause further overflows. Fixes: c156633f1353 ("Renesas Ethernet AVB driver proper") Signed-off-by: Kazuya Mizuguchi [simon: reworked changelog] Signed-off-by: Simon Horman Acked-by: Sergei Shtylyov Signed-off-by: David S. Miller commit 28e46a0f2e03ab4ed0e23cace1ea89a68c8c115b Author: Elad Raz Date: Thu Jan 12 09:10:39 2017 +0100 mlxsw: pci: Fix EQE structure definition The event_data starts at offsets 0x00-0x0C and not at 0x08-0x14. This leads to duplication with other fields in the Event Queue Element such as sub-type, cqn and owner. Fixes: eda6500a987a0 ("mlxsw: Add PCI bus implementation") Signed-off-by: Elad Raz Signed-off-by: Jiri Pirko Signed-off-by: David S. Miller commit 400fc0106dd8c27ed84781c929c1a184785b9c79 Author: Arkadi Sharshevsky Date: Thu Jan 12 09:10:38 2017 +0100 mlxsw: switchx2: Fix memory leak at skb reallocation During transmission the skb is checked for headroom in order to add a vendor-specific header. In case the skb needs to be re-allocated, skb_realloc_headroom() is called to make a private copy of the original, but it doesn't release the original. The current code assumes that the original skb is released during reallocation and only releases it on the error path, which causes a memory leak. Fix this by adding the original skb release to the main path. Fixes: d003462a50de ("mlxsw: Simplify mlxsw_sx_port_xmit function") Signed-off-by: Arkadi Sharshevsky Signed-off-by: Jiri Pirko Signed-off-by: David S. Miller commit 36bf38d158d3482119b3e159c0619b3c1539b508 Author: Arkadi Sharshevsky Date: Thu Jan 12 09:10:37 2017 +0100 mlxsw: spectrum: Fix memory leak at skb reallocation During transmission the skb is checked for headroom in order to add a vendor-specific header. In case the skb needs to be re-allocated, skb_realloc_headroom() is called to make a private copy of the original, but it doesn't release the original. The current code assumes that the original skb is released during reallocation and only releases it on the error path, which causes a memory leak. Fix this by adding the original skb release to the main path. Fixes: 56ade8fe3fe1 ("mlxsw: spectrum: Add initial support for Spectrum ASIC") Signed-off-by: Arkadi Sharshevsky Reviewed-by: Ido Schimmel Signed-off-by: Jiri Pirko Signed-off-by: David S.
Miller commit 01167c7b9cbf099c69fe411a228e4e9c7104e123 Author: Stefan Wahren Date: Thu Jan 5 19:24:04 2017 +0000 mmc: mxs-mmc: Fix additional cycles after transmission stop According to the code the intention is to append 8 SCK cycles instead of 4 at the end of an MMC_STOP_TRANSMISSION command. But this will never happen because it is an AC command, not an ADTC command. So fix this by moving the statement into the right function. Signed-off-by: Stefan Wahren Fixes: e4243f13d10e ("mmc: mxs-mmc: add mmc host driver for i.MX23/28") Cc: Signed-off-by: Ulf Hansson commit e1d070c3793a2766122865a7c2142853b48808c5 Author: Hans de Goede Date: Wed Dec 21 00:19:19 2016 +0100 mmc: sdhci-acpi: Only power up enabled ACPI child devices Commit e5bbf30733f9 ("mmc: sdhci-acpi: Ensure connected devices are powered when probing") introduced code to power up any ACPI child nodes listed in the DSDT. But some DSDTs list all possible devices used on some board variants, while reporting whether the device is actually present and enabled in the status field of the device. So we end up calling the ACPI _PS0 (power-on) method for devices which are not actually present. This does not always end well, e.g. on my Cube iWork8 Air tablet, this results in freezing the entire tablet as soon as the r8723bs module is loaded. This commit fixes this by checking the child device's status.present and status.enabled bits and only calling acpi_device_fix_up_power() if both are set. Fixes: e5bbf30733f9 ("mmc: sdhci-acpi: Ensure connected devices are powered when probing") BugLink: https://github.com/hadess/rtl8723bs/issues/80 Signed-off-by: Hans de Goede Acked-by: Adrian Hunter Cc: Signed-off-by: Ulf Hansson commit 0719e72ccb801829a3d735d187ca8417f0930459 Author: stephen hemminger Date: Wed Jan 11 09:16:32 2017 -0800 netvsc: add rcu_read locking to netvsc callback The receive callback (in tasklet context) is using RCU to get a reference to the associated VF network device, but this is not safe: the RCU read lock needs to be held. Found by running with full lockdep debugging enabled. Fixes: f207c10d9823 ("hv_netvsc: use RCU to protect vf_netdev") Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit 4ecb1d83f6abe8d49163427f4d431ebe98f8bd5f Author: Martynas Pumputis Date: Wed Jan 11 15:18:53 2017 +0000 vxlan: Set ports in flow key when doing route lookups Otherwise, an xfrm policy with sport/dport being set cannot be matched. Signed-off-by: Martynas Pumputis Signed-off-by: David S. Miller commit 19c0f40d4fca3a47b8f784a627f0467f0138ccc8 Author: hayeswang Date: Wed Jan 11 16:25:34 2017 +0800 r8152: fix the sw rx checksum is unavailable Fix the issue that the hw rx checksum is always enabled and the user couldn't switch it to sw rx checksum. Note that RTL_VER_01 supports sw rx checksum only. Besides, the hw rx checksum for RTL_VER_02 is disabled after commit b9a321b48af4 ("r8152: Fix broken RX checksums."). Re-enable it. Signed-off-by: Hayes Wang Signed-off-by: David S. Miller commit d2941df8fbd9708035d66d889ada4d3d160170ce Author: Johannes Berg Date: Thu Oct 20 08:52:50 2016 +0200 mac80211: recalculate min channel width on VHT opmode changes When an associated station changes its VHT operating mode this can/will affect the bandwidth it's using, and consequently we must recalculate the minimum bandwidth we need to use.
Failure to do so can lead to one of two scenarios: 1) we use too high a bandwidth, which is benign; 2) we use too narrow a bandwidth, causing rate control and the actual PHY configuration to be out of sync, which can in turn cause problems/crashes. Signed-off-by: Johannes Berg commit 96aa2e7cf126773b16c6c19b7474a8a38d3c707e Author: Johannes Berg Date: Fri Oct 7 12:23:49 2016 +0200 mac80211: calculate min channel width correctly In the current minimum chandef code there's an issue in that the recalculation can happen after rate control is initialized for a station that has a wider bandwidth than the current chanctx, and then rate control can immediately start using those higher rates, which could cause problems. Observe first of all that this problem exists because we don't take non-associated and non-uploaded stations into account. The restriction to non-associated is quite pointless and is one of the causes for the problem described above, since the rate init will happen before the station is set to associated; no frames could actually be sent until associated, but the rate table can already contain higher rates and that might cause problems. Also, rejecting non-uploaded stations is wrong, since rate control can select higher rates for those as well. Secondly, it's then necessary to recalculate the minimal config before initializing rate control, so that when rate control is initialized, the higher rates are already available. This can be done easily by adding the necessary function call in rate init. Change-Id: Ib9bc02d34797078db55459d196993f39dcd43070 Signed-off-by: Johannes Berg commit 06f7c88c107fb469f4f1344142e80df5175c6836 Author: Beni Lev Date: Tue Jul 19 19:28:56 2016 +0300 cfg80211: consider VHT opmode on station update Currently, this attribute is only fetched on station addition, but not on station change. Since this info is only present in the assoc request, with full station state support in the driver it cannot be present when the station is added. Thus, add support for changing the VHT opmode on station update if done before (or while) the station is marked as associated. After this, ignore it, since it used to be ignored. Signed-off-by: Beni Lev Signed-off-by: Johannes Berg commit d7f842442f766db3f39fc5d166ddcc24bf817056 Author: Emmanuel Grumbach Date: Tue Oct 25 10:32:16 2016 +0300 mac80211: fix the TID on NDPs sent as EOSP carrier In the commit below, I forgot to translate mac80211's AC to QoS IE order. Moreover, the condition in the if was wrong. Fix both issues. This bug would hit only with clients that didn't set all the ACs as delivery enabled. Fixes: f438ceb81d4 ("mac80211: uapsd_queues is in QoS IE order") Signed-off-by: Emmanuel Grumbach Signed-off-by: Johannes Berg commit c38c39bf7cc04d688291f382469e84ec2a8548a4 Author: Cedric Izoard Date: Wed Jan 11 14:39:07 2017 +0000 mac80211: Fix headroom allocation when forwarding mesh pkt This patch fixes an issue introduced by my previous commit, which tried to ensure enough headroom was present and instead broke it. When forwarding a mesh packet, mac80211 may also add a security header, which must therefore be taken into account in the needed headroom.
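In code terms, the headroom reserved when the received frame is copied for forwarding has to cover both the driver's TX headroom and the worst-case security header. A rough sketch of that allocation is shown below; it reuses the field names mac80211 carries in this era (local->tx_headroom, sdata->encrypt_headroom) but is illustrative only, the actual change being the one in mac80211's mesh RX forwarding path.

	/* Sketch: copy the frame for forwarding with enough headroom for
	 * the driver (local->tx_headroom) plus a possible security header
	 * (sdata->encrypt_headroom); illustrative, not the exact hunk.
	 */
	fwd_skb = skb_copy_expand(skb,
				  local->tx_headroom + sdata->encrypt_headroom,
				  0, GFP_ATOMIC);
	if (!fwd_skb)
		goto out;	/* placeholder error label */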
Fixes: d8da0b5d64d5 ("mac80211: Ensure enough headroom when forwarding mesh pkt") Signed-off-by: Cedric Izoard Signed-off-by: Johannes Berg commit ddc37832a1349f474c4532de381498020ed71d31 Author: Mark Rutland Date: Fri Jan 6 13:12:47 2017 +0100 ARM: 8634/1: hw_breakpoint: blacklist Scorpion CPUs On APQ8060, the kernel crashes in arch_hw_breakpoint_init, taking an undefined instruction trap within write_wb_reg. This is because Scorpion CPUs erroneously appear to set DBGPRSR.SPD when WFI is issued, even if the core is not powered down. When DBGPRSR.SPD is set, breakpoint and watchpoint registers are treated as undefined. It's possible to trigger similar crashes later on from userspace, by requesting the kernel to install a breakpoint or watchpoint, as we can go idle at any point between the reset of the debug registers and their later use. This has always been the case. Given that this has always been broken, no-one has complained until now, and there is no clear workaround, disable hardware breakpoints and watchpoints on Scorpion to avoid these issues. Signed-off-by: Mark Rutland Reported-by: Linus Walleij Reviewed-by: Stephen Boyd Acked-by: Will Deacon Cc: Russell King Cc: stable@vger.kernel.org Signed-off-by: Russell King commit 270c8cf1cacc69cb8d99dea812f06067a45e4609 Author: Rabin Vincent Date: Wed Nov 23 13:02:32 2016 +0100 ARM: 8632/1: ftrace: fix syscall name matching ARM has a few system calls (most notably mmap) for which the names of the functions which are referenced in the syscall table do not match the names of the syscall tracepoints. As a consequence of this, these tracepoints are not made available. Implement arch_syscall_match_sym_name to fix this and allow tracing even these system calls. Signed-off-by: Rabin Vincent Signed-off-by: Russell King commit 19a91dd4e39e755d650444da7f3a571b40a11093 Author: Heinrich Schuchardt Date: Fri Dec 23 16:01:08 2016 +0100 MMC: meson: avoid possible NULL dereference No actual segmentation faults were observed but the coding is at least inconsistent. irqreturn_t meson_mmc_irq(): We should not dereference host before checking it. meson_mmc_irq_thread(): If cmd or mrq are NULL we should not dereference them after writing a warning. Fixes: 51c5d8447bd7 MMC: meson: initial support for GX platforms Signed-off-by: Heinrich Schuchardt Acked-by: Kevin Hilman Signed-off-by: Ulf Hansson commit eeb0d56fab4cd7848cf2be6704fa48900dbc1381 Author: Johannes Berg Date: Wed Dec 14 16:47:43 2016 +0100 mac80211: implement multicast forwarding on fast-RX path In AP (or VLAN) mode, when unicast 802.11 packets are received, they might actually be multicast after conversion. In this case the fast-RX path didn't handle them properly to send them back to the wireless medium. Implement that by copying the SKB and sending it back out. The possible alternative would be to just punt the packet back to the regular (slow) RX path, but since we have almost all of the required code here already it's not so complicated to add here. Punting it back would also mean acquiring the spinlock, which would be bad for the stated purpose of the fast-RX path, to enable well-performing parallel RX. 
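The mechanism mirrors what the slow path already does in ieee80211_deliver_skb(): after the 802.11-to-802.3 conversion, a copy of any multicast frame is queued back out on the same interface while the original continues up the stack. The fragment below is a hedged sketch of that pattern (variable names such as ehdr and xmit_skb are placeholders), not the exact fast-RX hunk.

	/* Sketch: forward converted multicast frames back to the wireless
	 * medium by transmitting a copy, while the original skb is passed
	 * up to the network stack as usual.
	 */
	if (is_multicast_ether_addr(ehdr->h_dest)) {
		xmit_skb = skb_copy(skb, GFP_ATOMIC);
		if (xmit_skb) {
			xmit_skb->protocol = htons(ETH_P_802_3);
			skb_reset_network_header(xmit_skb);
			skb_reset_mac_header(xmit_skb);
			dev_queue_xmit(xmit_skb);
		}
	}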
Cc: stable@vger.kernel.org Signed-off-by: Johannes Berg commit cf9e1672a66c49ed8903c01b4c380a2f2dc91b40 Author: Vladimir Zapolskiy Date: Mon Dec 5 03:47:10 2016 +0200 mtd: nand: lpc32xx: fix invalid error handling of a requested irq The semantics of NR_IRQS differ between machines with the SPARSE_IRQ option disabled and enabled; in the latter case IRQs are allocated starting at least from the value specified by NR_IRQS and going upwards. The check of (irq >= NR_IRQS) to decide about an error code returned by platform_get_irq() is therefore completely invalid; don't attempt to overrule the irq subsystem in the driver. The change fixes LPC32xx NAND MLC driver initialization on boot. Fixes: 8cb17b5ed017 ("irqchip: Add LPC32xx interrupt controller driver") Cc: stable@kernel.org # v4.7+ Signed-off-by: Vladimir Zapolskiy Acked-by: Sylvain Lemieux Signed-off-by: Boris Brezillon commit 8043d25b3c0fa0a8f531333707f682f03b6febdb Author: Marc Gonzalez Date: Tue Jan 3 11:01:14 2017 +0100 mtd: nand: tango: Reset pbus to raw mode in probe Linux should not expect the boot loader to properly configure the peripheral bus "pad mode", so reset PBUS_PAD_MODE to raw. Signed-off-by: Marc Gonzalez Signed-off-by: Boris Brezillon commit 7165b8ad36f8bda42a5a8aa059b9a5071acc2210 Author: Marc Gonzalez Date: Mon Dec 19 15:30:12 2016 +0100 mtd: nand: tango: Update DT binding description Visually separate register ranges (address/size pairs) in the reg prop. Change the DMA channel name, for consistency with other drivers. Signed-off-by: Marc Gonzalez Signed-off-by: Boris Brezillon commit f0fcdc506b76e924c60fa607bba5872ca4745476 Author: Randy Dunlap Date: Sun Jan 1 18:58:27 2017 -0800 mtd: nand: oxnas_nand: fix build errors on arch/um, require HAS_IOMEM Fix build errors on arch/um, which does not support HAS_IOMEM, while the oxnas_nand.c driver uses interfaces that are supplied by HAS_IOMEM. (loadable module build:) ERROR: "devm_ioremap_resource" [drivers/mtd/nand/oxnas_nand.ko] undefined! or (built-in build:) drivers/built-in.o: In function `oxnas_nand_probe': drivers/mtd/nand/oxnas_nand.c:102: undefined reference to `devm_ioremap_resource' Fixes: 668592492409 ("mtd: nand: Add OX820 NAND Support") Signed-off-by: Randy Dunlap Reported-by: kbuild test robot Acked-by: Neil Armstrong Signed-off-by: Boris Brezillon commit a2724663494f7313f53da10d8c0a729c5e3c4dea Author: Hauke Mehrtens Date: Mon Dec 5 22:14:37 2016 +0100 mtd: nand: xway: fix build because of module functions Remove the usage of module functions to make this driver compile again. Otherwise an include of linux/module.h would be needed. Fixes: 024366750c2e ("mtd: nand: xway: convert to normal platform driver") Cc: Signed-off-by: Hauke Mehrtens Signed-off-by: Boris Brezillon commit 73529c872a189c747bdb528ce9b85b67b0e28dec Author: Hauke Mehrtens Date: Mon Dec 5 22:14:36 2016 +0100 mtd: nand: xway: disable module support The xway_nand driver accesses the ltq_ebu_membase symbol, which is not exported. This also should not get exported, and we should handle the EBU interface in a better way later. This quick fix just deactivates support for building as a module. Fixes: 99f2b107924c ("mtd: lantiq: Add NAND support on Lantiq XWAY SoC.") Cc: Signed-off-by: Hauke Mehrtens Signed-off-by: Boris Brezillon commit 7e164ce4e8ecd7e9a58a83750bd3ee03125df154 Author: Sedat Dilek Date: Mon Dec 26 11:05:11 2016 +0100 perf/x86/amd/ibs: Fix typo after cleanup state names in cpu/hotplug Fix a small typo left over from the cleanup of state names in cpu/hotplug.
The new convention is 'subsys/xxx/yyy:state' where "state" here is called "starting" not "STARTING". Fixes: 73c1b41e63f0 ("cpu/hotplug: Cleanup state names") Signed-off-by: Sedat Dilek Cc: Peter Zijlstra Cc: Borislav Petkov Cc: Paul Gortmaker Link: http://lkml.kernel.org/r/20161226100511.8662-1-sedat.dilek@gmail.com Signed-off-by: Thomas Gleixner --- .../devicetree/bindings/mtd/tango-nand.txt | 6 +- .../devicetree/bindings/net/ti,dp83867.txt | 6 +- MAINTAINERS | 10 +- arch/arm/include/asm/cputype.h | 3 + arch/arm/include/asm/ftrace.h | 18 ++++ arch/arm/include/{ => uapi}/asm/types.h | 6 +- arch/arm/kernel/hw_breakpoint.c | 16 ++++ arch/arm/kernel/smp_tlb.c | 7 ++ arch/x86/events/amd/ibs.c | 2 +- arch/x86/events/intel/core.c | 7 +- drivers/clocksource/exynos_mct.c | 1 + drivers/mmc/core/mmc_ops.c | 25 +++-- drivers/mmc/host/meson-gx-mmc.c | 8 +- drivers/mmc/host/mxs-mmc.c | 6 +- drivers/mmc/host/sdhci-acpi.c | 3 +- drivers/mtd/nand/Kconfig | 3 +- drivers/mtd/nand/lpc32xx_mlc.c | 2 +- drivers/mtd/nand/tango_nand.c | 4 +- drivers/mtd/nand/xway_nand.c | 5 +- drivers/net/ethernet/broadcom/bcmsysport.c | 25 +++-- .../net/ethernet/cavium/thunder/thunder_bgx.c | 11 ++- drivers/net/ethernet/emulex/benet/be_cmds.c | 2 +- drivers/net/ethernet/emulex/benet/be_main.c | 18 +++- drivers/net/ethernet/mellanox/mlx4/cq.c | 38 ++++---- .../net/ethernet/mellanox/mlx4/en_netdev.c | 5 +- drivers/net/ethernet/mellanox/mlx4/eq.c | 23 +++-- .../ethernet/mellanox/mlx4/resource_tracker.c | 5 +- .../net/ethernet/mellanox/mlx5/core/en_tc.c | 11 ++- drivers/net/ethernet/mellanox/mlxsw/pci_hw.h | 8 +- .../net/ethernet/mellanox/mlxsw/spectrum.c | 1 + .../net/ethernet/mellanox/mlxsw/switchx2.c | 1 + drivers/net/ethernet/qualcomm/emac/emac-phy.c | 7 ++ drivers/net/ethernet/qualcomm/emac/emac.c | 6 +- drivers/net/ethernet/renesas/ravb_main.c | 21 ++-- .../net/ethernet/stmicro/stmmac/stmmac_main.c | 19 ++-- drivers/net/ethernet/ti/cpmac.c | 2 +- drivers/net/hyperv/netvsc_drv.c | 3 + drivers/net/ieee802154/at86rf230.c | 4 +- drivers/net/ieee802154/atusb.c | 59 ++++++++---- drivers/net/phy/dp83867.c | 8 +- drivers/net/usb/r8152.c | 7 +- drivers/net/vxlan.c | 13 ++- fs/nfs/nfs4proc.c | 29 +++--- fs/nfs/nfs4state.c | 1 - fs/nfsd/nfs4xdr.c | 4 +- include/linux/bpf.h | 2 +- include/linux/cpuhotplug.h | 2 + include/linux/filter.h | 6 +- include/linux/kernel.h | 4 +- include/linux/rcupdate.h | 4 + include/linux/sunrpc/svc_xprt.h | 1 + include/linux/tcp.h | 7 +- include/uapi/linux/nl80211.h | 4 +- include/uapi/linux/pkt_cls.h | 2 +- include/uapi/linux/tc_act/tc_bpf.h | 2 +- kernel/bpf/core.c | 14 +-- kernel/bpf/syscall.c | 8 +- kernel/bpf/verifier.c | 2 +- kernel/cpu.c | 22 ++++- kernel/module.c | 2 +- kernel/panic.c | 2 +- kernel/rcu/rcu.h | 1 + kernel/rcu/tiny.c | 4 - kernel/rcu/tiny_plugin.h | 9 +- kernel/rcu/tree.c | 33 ++++--- kernel/rcu/tree_exp.h | 52 +++++++--- kernel/rcu/tree_plugin.h | 2 +- kernel/rcu/update.c | 38 ++++++-- lib/swiotlb.c | 6 +- net/ax25/ax25_subr.c | 2 +- net/ipv4/fib_semantics.c | 11 ++- net/ipv4/route.c | 2 +- net/ipv4/tcp_fastopen.c | 2 +- net/ipv6/ip6_tunnel.c | 4 +- net/ipv6/mcast.c | 51 ++++++---- net/ipv6/route.c | 3 +- net/ipv6/seg6_hmac.c | 2 +- net/ipv6/seg6_iptunnel.c | 4 + net/mac80211/chan.c | 3 - net/mac80211/iface.c | 21 ++++ net/mac80211/main.c | 13 ++- net/mac80211/rate.c | 2 + net/mac80211/rx.c | 38 ++++---- net/mac80211/sta_info.c | 4 +- net/mac80211/tx.c | 17 ++-- net/mac80211/vht.c | 4 +- net/openvswitch/conntrack.c | 6 +- net/sched/act_api.c | 5 +- net/sched/act_bpf.c | 5 +- 
net/sched/cls_bpf.c | 4 +- net/sunrpc/auth_gss/svcauth_gss.c | 2 +- net/sunrpc/svc_xprt.c | 10 +- net/sunrpc/xprtrdma/svc_rdma_recvfrom.c | 2 - net/tipc/discover.c | 4 +- net/tipc/link.c | 2 +- net/tipc/msg.c | 16 ++-- net/tipc/msg.h | 2 +- net/tipc/name_distr.c | 2 +- net/wireless/nl80211.c | 15 +++ tools/perf/util/probe-event.c | 95 ++++++++++++------- tools/perf/util/probe-finder.c | 15 ++- tools/perf/util/probe-finder.h | 3 + 102 files changed, 712 insertions(+), 357 deletions(-) rename arch/arm/include/{ => uapi}/asm/types.h (94%) diff --git a/Documentation/devicetree/bindings/mtd/tango-nand.txt b/Documentation/devicetree/bindings/mtd/tango-nand.txt index ad5a02f2ac8c9d..cd1bf2ac9055fc 100644 --- a/Documentation/devicetree/bindings/mtd/tango-nand.txt +++ b/Documentation/devicetree/bindings/mtd/tango-nand.txt @@ -5,7 +5,7 @@ Required properties: - compatible: "sigma,smp8758-nand" - reg: address/size of nfc_reg, nfc_mem, and pbus_reg - dmas: reference to the DMA channel used by the controller -- dma-names: "nfc_sbox" +- dma-names: "rxtx" - clocks: reference to the system clock - #address-cells: <1> - #size-cells: <0> @@ -17,9 +17,9 @@ Example: nandc: nand-controller@2c000 { compatible = "sigma,smp8758-nand"; - reg = <0x2c000 0x30 0x2d000 0x800 0x20000 0x1000>; + reg = <0x2c000 0x30>, <0x2d000 0x800>, <0x20000 0x1000>; dmas = <&dma0 3>; - dma-names = "nfc_sbox"; + dma-names = "rxtx"; clocks = <&clkgen SYS_CLK>; #address-cells = <1>; #size-cells = <0>; diff --git a/Documentation/devicetree/bindings/net/ti,dp83867.txt b/Documentation/devicetree/bindings/net/ti,dp83867.txt index 85bf945b898f0d..afe9630a5e7de1 100644 --- a/Documentation/devicetree/bindings/net/ti,dp83867.txt +++ b/Documentation/devicetree/bindings/net/ti,dp83867.txt @@ -3,9 +3,11 @@ Required properties: - reg - The ID number for the phy, usually a small integer - ti,rx-internal-delay - RGMII Receive Clock Delay - see dt-bindings/net/ti-dp83867.h - for applicable values + for applicable values. Required only if interface type is + PHY_INTERFACE_MODE_RGMII_ID or PHY_INTERFACE_MODE_RGMII_RXID - ti,tx-internal-delay - RGMII Transmit Clock Delay - see dt-bindings/net/ti-dp83867.h - for applicable values + for applicable values. 
Required only if interface type is + PHY_INTERFACE_MODE_RGMII_ID or PHY_INTERFACE_MODE_RGMII_TXID - ti,fifo-depth - Transmitt FIFO depth- see dt-bindings/net/ti-dp83867.h for applicable values diff --git a/MAINTAINERS b/MAINTAINERS index c36976d3bd1a72..4432e23d56faa4 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -976,6 +976,7 @@ M: Russell King L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.armlinux.org.uk/ S: Maintained +T: git git://git.armlinux.org.uk/~rmk/linux-arm.git F: arch/arm/ ARM SUB-ARCHITECTURES @@ -1153,6 +1154,7 @@ ARM/CLKDEV SUPPORT M: Russell King L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained +T: git git://git.armlinux.org.uk/~rmk/linux-arm.git clkdev F: arch/arm/include/asm/clkdev.h F: drivers/clk/clkdev.c @@ -7697,8 +7699,10 @@ F: drivers/net/dsa/mv88e6xxx/ F: Documentation/devicetree/bindings/net/dsa/marvell.txt MARVELL ARMADA DRM SUPPORT -M: Russell King +M: Russell King S: Maintained +T: git git://git.armlinux.org.uk/~rmk/linux-arm.git drm-armada-devel +T: git git://git.armlinux.org.uk/~rmk/linux-arm.git drm-armada-fixes F: drivers/gpu/drm/armada/ F: include/uapi/drm/armada_drm.h F: Documentation/devicetree/bindings/display/armada/ @@ -8903,8 +8907,10 @@ S: Supported F: drivers/nfc/nxp-nci NXP TDA998X DRM DRIVER -M: Russell King +M: Russell King S: Supported +T: git git://git.armlinux.org.uk/~rmk/linux-arm.git drm-tda998x-devel +T: git git://git.armlinux.org.uk/~rmk/linux-arm.git drm-tda998x-fixes F: drivers/gpu/drm/i2c/tda998x_drv.c F: include/drm/i2c/tda998x.h diff --git a/arch/arm/include/asm/cputype.h b/arch/arm/include/asm/cputype.h index 522b5feb4eaa34..b62eaeb147aa9a 100644 --- a/arch/arm/include/asm/cputype.h +++ b/arch/arm/include/asm/cputype.h @@ -94,6 +94,9 @@ #define ARM_CPU_XSCALE_ARCH_V2 0x4000 #define ARM_CPU_XSCALE_ARCH_V3 0x6000 +/* Qualcomm implemented cores */ +#define ARM_CPU_PART_SCORPION 0x510002d0 + extern unsigned int processor_id; #ifdef CONFIG_CPU_CP15 diff --git a/arch/arm/include/asm/ftrace.h b/arch/arm/include/asm/ftrace.h index bfe2a2f5a644e8..22b73112b75f20 100644 --- a/arch/arm/include/asm/ftrace.h +++ b/arch/arm/include/asm/ftrace.h @@ -54,6 +54,24 @@ static inline void *return_address(unsigned int level) #define ftrace_return_address(n) return_address(n) +#define ARCH_HAS_SYSCALL_MATCH_SYM_NAME + +static inline bool arch_syscall_match_sym_name(const char *sym, + const char *name) +{ + if (!strcmp(sym, "sys_mmap2")) + sym = "sys_mmap_pgoff"; + else if (!strcmp(sym, "sys_statfs64_wrapper")) + sym = "sys_statfs64"; + else if (!strcmp(sym, "sys_fstatfs64_wrapper")) + sym = "sys_fstatfs64"; + else if (!strcmp(sym, "sys_arm_fadvise64_64")) + sym = "sys_fadvise64_64"; + + /* Ignore case since sym may start with "SyS" instead of "sys" */ + return !strcasecmp(sym, name); +} + #endif /* ifndef __ASSEMBLY__ */ #endif /* _ASM_ARM_FTRACE */ diff --git a/arch/arm/include/asm/types.h b/arch/arm/include/uapi/asm/types.h similarity index 94% rename from arch/arm/include/asm/types.h rename to arch/arm/include/uapi/asm/types.h index a53cdb8f068c2f..9435a42f575e02 100644 --- a/arch/arm/include/asm/types.h +++ b/arch/arm/include/uapi/asm/types.h @@ -1,5 +1,5 @@ -#ifndef _ASM_TYPES_H -#define _ASM_TYPES_H +#ifndef _UAPI_ASM_TYPES_H +#define _UAPI_ASM_TYPES_H #include @@ -37,4 +37,4 @@ #define __UINTPTR_TYPE__ unsigned long #endif -#endif /* _ASM_TYPES_H */ +#endif /* _UAPI_ASM_TYPES_H */ diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c index 
188180b5523de0..be3b3fbd382fbb 100644 --- a/arch/arm/kernel/hw_breakpoint.c +++ b/arch/arm/kernel/hw_breakpoint.c @@ -1063,6 +1063,22 @@ static int __init arch_hw_breakpoint_init(void) return 0; } + /* + * Scorpion CPUs (at least those in APQ8060) seem to set DBGPRSR.SPD + * whenever a WFI is issued, even if the core is not powered down, in + * violation of the architecture. When DBGPRSR.SPD is set, accesses to + * breakpoint and watchpoint registers are treated as undefined, so + * this results in boot time and runtime failures when these are + * accessed and we unexpectedly take a trap. + * + * It's not clear if/how this can be worked around, so we blacklist + * Scorpion CPUs to avoid these issues. + */ + if (read_cpuid_part() == ARM_CPU_PART_SCORPION) { + pr_info("Scorpion CPU detected. Hardware breakpoints and watchpoints disabled\n"); + return 0; + } + has_ossr = core_has_os_save_restore(); /* Determine how many BRPs/WRPs are available. */ diff --git a/arch/arm/kernel/smp_tlb.c b/arch/arm/kernel/smp_tlb.c index 22313cb5336257..9af0701f7094be 100644 --- a/arch/arm/kernel/smp_tlb.c +++ b/arch/arm/kernel/smp_tlb.c @@ -9,6 +9,7 @@ */ #include #include +#include #include #include @@ -40,8 +41,11 @@ static inline void ipi_flush_tlb_mm(void *arg) static inline void ipi_flush_tlb_page(void *arg) { struct tlb_args *ta = (struct tlb_args *)arg; + unsigned int __ua_flags = uaccess_save_and_enable(); local_flush_tlb_page(ta->ta_vma, ta->ta_start); + + uaccess_restore(__ua_flags); } static inline void ipi_flush_tlb_kernel_page(void *arg) @@ -54,8 +58,11 @@ static inline void ipi_flush_tlb_kernel_page(void *arg) static inline void ipi_flush_tlb_range(void *arg) { struct tlb_args *ta = (struct tlb_args *)arg; + unsigned int __ua_flags = uaccess_save_and_enable(); local_flush_tlb_range(ta->ta_vma, ta->ta_start, ta->ta_end); + + uaccess_restore(__ua_flags); } static inline void ipi_flush_tlb_kernel_range(void *arg) diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c index 05612a2529c8bb..496e60391fac68 100644 --- a/arch/x86/events/amd/ibs.c +++ b/arch/x86/events/amd/ibs.c @@ -1010,7 +1010,7 @@ static __init int amd_ibs_init(void) * all online cpus. 
*/ cpuhp_setup_state(CPUHP_AP_PERF_X86_AMD_IBS_STARTING, - "perf/x86/amd/ibs:STARTING", + "perf/x86/amd/ibs:starting", x86_pmu_amd_ibs_starting_cpu, x86_pmu_amd_ibs_dying_cpu); diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c index d611cab214a605..eb1484c86bb4b4 100644 --- a/arch/x86/events/intel/core.c +++ b/arch/x86/events/intel/core.c @@ -3176,13 +3176,16 @@ static void intel_pmu_cpu_starting(int cpu) if (x86_pmu.flags & PMU_FL_EXCL_CNTRS) { for_each_cpu(i, topology_sibling_cpumask(cpu)) { + struct cpu_hw_events *sibling; struct intel_excl_cntrs *c; - c = per_cpu(cpu_hw_events, i).excl_cntrs; + sibling = &per_cpu(cpu_hw_events, i); + c = sibling->excl_cntrs; if (c && c->core_id == core_id) { cpuc->kfree_on_online[1] = cpuc->excl_cntrs; cpuc->excl_cntrs = c; - cpuc->excl_thread_id = 1; + if (!sibling->excl_thread_id) + cpuc->excl_thread_id = 1; break; } } diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c index 4da1dc2278bd7f..670ff0f25b6712 100644 --- a/drivers/clocksource/exynos_mct.c +++ b/drivers/clocksource/exynos_mct.c @@ -495,6 +495,7 @@ static int exynos4_mct_dying_cpu(unsigned int cpu) if (mct_int_type == MCT_INT_SPI) { if (evt->irq != -1) disable_irq_nosync(evt->irq); + exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET); } else { disable_percpu_irq(mct_irqs[MCT_L0_IRQ]); } diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c index b11c3455b040c5..e6ea8503f40c84 100644 --- a/drivers/mmc/core/mmc_ops.c +++ b/drivers/mmc/core/mmc_ops.c @@ -506,9 +506,6 @@ static int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms, } } while (busy); - if (host->ops->card_busy && send_status) - return mmc_switch_status(card); - return 0; } @@ -577,24 +574,26 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value, if (!use_busy_signal) goto out; - /* Switch to new timing before poll and check switch status. */ - if (timing) - mmc_set_timing(host, timing); - /*If SPI or used HW busy detection above, then we don't need to poll. */ if (((host->caps & MMC_CAP_WAIT_WHILE_BUSY) && use_r1b_resp) || - mmc_host_is_spi(host)) { - if (send_status) - err = mmc_switch_status(card); + mmc_host_is_spi(host)) goto out_tim; - } /* Let's try to poll to find out when the command is completed. */ err = mmc_poll_for_busy(card, timeout_ms, send_status, retry_crc_err); + if (err) + goto out; out_tim: - if (err && timing) - mmc_set_timing(host, old_timing); + /* Switch to new timing before check switch status. 
*/ + if (timing) + mmc_set_timing(host, timing); + + if (send_status) { + err = mmc_switch_status(card); + if (err && timing) + mmc_set_timing(host, old_timing); + } out: mmc_retune_release(host); diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c index b352760c041ee5..09739352834c82 100644 --- a/drivers/mmc/host/meson-gx-mmc.c +++ b/drivers/mmc/host/meson-gx-mmc.c @@ -578,13 +578,15 @@ static irqreturn_t meson_mmc_irq(int irq, void *dev_id) { struct meson_host *host = dev_id; struct mmc_request *mrq; - struct mmc_command *cmd = host->cmd; + struct mmc_command *cmd; u32 irq_en, status, raw_status; irqreturn_t ret = IRQ_HANDLED; if (WARN_ON(!host)) return IRQ_NONE; + cmd = host->cmd; + mrq = host->mrq; if (WARN_ON(!mrq)) @@ -670,10 +672,10 @@ static irqreturn_t meson_mmc_irq_thread(int irq, void *dev_id) int ret = IRQ_HANDLED; if (WARN_ON(!mrq)) - ret = IRQ_NONE; + return IRQ_NONE; if (WARN_ON(!cmd)) - ret = IRQ_NONE; + return IRQ_NONE; data = cmd->data; if (data) { diff --git a/drivers/mmc/host/mxs-mmc.c b/drivers/mmc/host/mxs-mmc.c index 44ecebd1ea8c18..c8b8ac66ff7e3a 100644 --- a/drivers/mmc/host/mxs-mmc.c +++ b/drivers/mmc/host/mxs-mmc.c @@ -309,6 +309,9 @@ static void mxs_mmc_ac(struct mxs_mmc_host *host) cmd0 = BF_SSP(cmd->opcode, CMD0_CMD); cmd1 = cmd->arg; + if (cmd->opcode == MMC_STOP_TRANSMISSION) + cmd0 |= BM_SSP_CMD0_APPEND_8CYC; + if (host->sdio_irq_en) { ctrl0 |= BM_SSP_CTRL0_SDIO_IRQ_CHECK; cmd0 |= BM_SSP_CMD0_CONT_CLKING_EN | BM_SSP_CMD0_SLOW_CLKING_EN; @@ -417,8 +420,7 @@ static void mxs_mmc_adtc(struct mxs_mmc_host *host) ssp->base + HW_SSP_BLOCK_SIZE); } - if ((cmd->opcode == MMC_STOP_TRANSMISSION) || - (cmd->opcode == SD_IO_RW_EXTENDED)) + if (cmd->opcode == SD_IO_RW_EXTENDED) cmd0 |= BM_SSP_CMD0_APPEND_8CYC; cmd1 = cmd->arg; diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c index 160f695cc09c61..278a5a435ab76b 100644 --- a/drivers/mmc/host/sdhci-acpi.c +++ b/drivers/mmc/host/sdhci-acpi.c @@ -395,7 +395,8 @@ static int sdhci_acpi_probe(struct platform_device *pdev) /* Power on the SDHCI controller and its children */ acpi_device_fix_up_power(device); list_for_each_entry(child, &device->children, node) - acpi_device_fix_up_power(child); + if (child->status.present && child->status.enabled) + acpi_device_fix_up_power(child); if (acpi_bus_get_status(device) || !device->status.present) return -ENODEV; diff --git a/drivers/mtd/nand/Kconfig b/drivers/mtd/nand/Kconfig index 353a9ddf6b975d..9ce5dcb4abd0f5 100644 --- a/drivers/mtd/nand/Kconfig +++ b/drivers/mtd/nand/Kconfig @@ -426,6 +426,7 @@ config MTD_NAND_ORION config MTD_NAND_OXNAS tristate "NAND Flash support for Oxford Semiconductor SoC" + depends on HAS_IOMEM help This enables the NAND flash controller on Oxford Semiconductor SoCs. @@ -540,7 +541,7 @@ config MTD_NAND_FSMC Flexible Static Memory Controller (FSMC) config MTD_NAND_XWAY - tristate "Support for NAND on Lantiq XWAY SoC" + bool "Support for NAND on Lantiq XWAY SoC" depends on LANTIQ && SOC_TYPE_XWAY help Enables support for NAND Flash chips on Lantiq XWAY SoCs. 
NAND is attached diff --git a/drivers/mtd/nand/lpc32xx_mlc.c b/drivers/mtd/nand/lpc32xx_mlc.c index 5553a5d9efd114..846a66c1b1338f 100644 --- a/drivers/mtd/nand/lpc32xx_mlc.c +++ b/drivers/mtd/nand/lpc32xx_mlc.c @@ -775,7 +775,7 @@ static int lpc32xx_nand_probe(struct platform_device *pdev) init_completion(&host->comp_controller); host->irq = platform_get_irq(pdev, 0); - if ((host->irq < 0) || (host->irq >= NR_IRQS)) { + if (host->irq < 0) { dev_err(&pdev->dev, "failed to get platform irq\n"); res = -EINVAL; goto err_exit3; diff --git a/drivers/mtd/nand/tango_nand.c b/drivers/mtd/nand/tango_nand.c index 28c7f474be77b8..4a5e948c62df1b 100644 --- a/drivers/mtd/nand/tango_nand.c +++ b/drivers/mtd/nand/tango_nand.c @@ -632,11 +632,13 @@ static int tango_nand_probe(struct platform_device *pdev) if (IS_ERR(nfc->pbus_base)) return PTR_ERR(nfc->pbus_base); + writel_relaxed(MODE_RAW, nfc->pbus_base + PBUS_PAD_MODE); + clk = clk_get(&pdev->dev, NULL); if (IS_ERR(clk)) return PTR_ERR(clk); - nfc->chan = dma_request_chan(&pdev->dev, "nfc_sbox"); + nfc->chan = dma_request_chan(&pdev->dev, "rxtx"); if (IS_ERR(nfc->chan)) return PTR_ERR(nfc->chan); diff --git a/drivers/mtd/nand/xway_nand.c b/drivers/mtd/nand/xway_nand.c index 1f2948c0c458d6..895101a5e68645 100644 --- a/drivers/mtd/nand/xway_nand.c +++ b/drivers/mtd/nand/xway_nand.c @@ -232,7 +232,6 @@ static const struct of_device_id xway_nand_match[] = { { .compatible = "lantiq,nand-xway" }, {}, }; -MODULE_DEVICE_TABLE(of, xway_nand_match); static struct platform_driver xway_nand_driver = { .probe = xway_nand_probe, @@ -243,6 +242,4 @@ static struct platform_driver xway_nand_driver = { }, }; -module_platform_driver(xway_nand_driver); - -MODULE_LICENSE("GPL"); +builtin_platform_driver(xway_nand_driver); diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c index 7e8cf213fd813d..744ed6ddaf3739 100644 --- a/drivers/net/ethernet/broadcom/bcmsysport.c +++ b/drivers/net/ethernet/broadcom/bcmsysport.c @@ -710,11 +710,8 @@ static unsigned int __bcm_sysport_tx_reclaim(struct bcm_sysport_priv *priv, unsigned int c_index, last_c_index, last_tx_cn, num_tx_cbs; unsigned int pkts_compl = 0, bytes_compl = 0; struct bcm_sysport_cb *cb; - struct netdev_queue *txq; u32 hw_ind; - txq = netdev_get_tx_queue(ndev, ring->index); - /* Compute how many descriptors have been processed since last call */ hw_ind = tdma_readl(priv, TDMA_DESC_RING_PROD_CONS_INDEX(ring->index)); c_index = (hw_ind >> RING_CONS_INDEX_SHIFT) & RING_CONS_INDEX_MASK; @@ -745,9 +742,6 @@ static unsigned int __bcm_sysport_tx_reclaim(struct bcm_sysport_priv *priv, ring->c_index = c_index; - if (netif_tx_queue_stopped(txq) && pkts_compl) - netif_tx_wake_queue(txq); - netif_dbg(priv, tx_done, ndev, "ring=%d c_index=%d pkts_compl=%d, bytes_compl=%d\n", ring->index, ring->c_index, pkts_compl, bytes_compl); @@ -759,16 +753,33 @@ static unsigned int __bcm_sysport_tx_reclaim(struct bcm_sysport_priv *priv, static unsigned int bcm_sysport_tx_reclaim(struct bcm_sysport_priv *priv, struct bcm_sysport_tx_ring *ring) { + struct netdev_queue *txq; unsigned int released; unsigned long flags; + txq = netdev_get_tx_queue(priv->netdev, ring->index); + spin_lock_irqsave(&ring->lock, flags); released = __bcm_sysport_tx_reclaim(priv, ring); + if (released) + netif_tx_wake_queue(txq); + spin_unlock_irqrestore(&ring->lock, flags); return released; } +/* Locked version of the per-ring TX reclaim, but does not wake the queue */ +static void bcm_sysport_tx_clean(struct bcm_sysport_priv 
*priv, + struct bcm_sysport_tx_ring *ring) +{ + unsigned long flags; + + spin_lock_irqsave(&ring->lock, flags); + __bcm_sysport_tx_reclaim(priv, ring); + spin_unlock_irqrestore(&ring->lock, flags); +} + static int bcm_sysport_tx_poll(struct napi_struct *napi, int budget) { struct bcm_sysport_tx_ring *ring = @@ -1252,7 +1263,7 @@ static void bcm_sysport_fini_tx_ring(struct bcm_sysport_priv *priv, napi_disable(&ring->napi); netif_napi_del(&ring->napi); - bcm_sysport_tx_reclaim(priv, ring); + bcm_sysport_tx_clean(priv, ring); kfree(ring->cbs); ring->cbs = NULL; diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c index 9211c750e06426..2f85b64f01fa06 100644 --- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c +++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c @@ -47,8 +47,9 @@ struct lmac { struct bgx { u8 bgx_id; struct lmac lmac[MAX_LMAC_PER_BGX]; - int lmac_count; + u8 lmac_count; u8 max_lmac; + u8 acpi_lmac_idx; void __iomem *reg_base; struct pci_dev *pdev; bool is_dlm; @@ -1143,13 +1144,13 @@ static acpi_status bgx_acpi_register_phy(acpi_handle handle, if (acpi_bus_get_device(handle, &adev)) goto out; - acpi_get_mac_address(dev, adev, bgx->lmac[bgx->lmac_count].mac); + acpi_get_mac_address(dev, adev, bgx->lmac[bgx->acpi_lmac_idx].mac); - SET_NETDEV_DEV(&bgx->lmac[bgx->lmac_count].netdev, dev); + SET_NETDEV_DEV(&bgx->lmac[bgx->acpi_lmac_idx].netdev, dev); - bgx->lmac[bgx->lmac_count].lmacid = bgx->lmac_count; + bgx->lmac[bgx->acpi_lmac_idx].lmacid = bgx->acpi_lmac_idx; + bgx->acpi_lmac_idx++; /* move to next LMAC */ out: - bgx->lmac_count++; return AE_OK; } diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c index 0e74529a42095b..30e855004c5759 100644 --- a/drivers/net/ethernet/emulex/benet/be_cmds.c +++ b/drivers/net/ethernet/emulex/benet/be_cmds.c @@ -1118,7 +1118,7 @@ int be_cmd_pmac_add(struct be_adapter *adapter, u8 *mac_addr, err: mutex_unlock(&adapter->mcc_lock); - if (status == MCC_STATUS_UNAUTHORIZED_REQUEST) + if (base_status(status) == MCC_STATUS_UNAUTHORIZED_REQUEST) status = -EPERM; return status; diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c index ec010ced6c9962..1a7f8ad7b9c611 100644 --- a/drivers/net/ethernet/emulex/benet/be_main.c +++ b/drivers/net/ethernet/emulex/benet/be_main.c @@ -318,6 +318,13 @@ static int be_mac_addr_set(struct net_device *netdev, void *p) if (ether_addr_equal(addr->sa_data, adapter->dev_mac)) return 0; + /* BE3 VFs without FILTMGMT privilege are not allowed to set its MAC + * address + */ + if (BEx_chip(adapter) && be_virtfn(adapter) && + !check_privilege(adapter, BE_PRIV_FILTMGMT)) + return -EPERM; + /* if device is not running, copy MAC to netdev->dev_addr */ if (!netif_running(netdev)) goto done; @@ -3609,7 +3616,11 @@ static void be_rx_qs_destroy(struct be_adapter *adapter) static void be_disable_if_filters(struct be_adapter *adapter) { - be_dev_mac_del(adapter, adapter->pmac_id[0]); + /* Don't delete MAC on BE3 VFs without FILTMGMT privilege */ + if (!BEx_chip(adapter) || !be_virtfn(adapter) || + check_privilege(adapter, BE_PRIV_FILTMGMT)) + be_dev_mac_del(adapter, adapter->pmac_id[0]); + be_clear_uc_list(adapter); be_clear_mc_list(adapter); @@ -3762,8 +3773,9 @@ static int be_enable_if_filters(struct be_adapter *adapter) if (status) return status; - /* For BE3 VFs, the PF programs the initial MAC address */ - if (!(BEx_chip(adapter) && be_virtfn(adapter))) { + /* Don't 
add MAC on BE3 VFs without FILTMGMT privilege */ + if (!BEx_chip(adapter) || !be_virtfn(adapter) || + check_privilege(adapter, BE_PRIV_FILTMGMT)) { status = be_dev_mac_add(adapter, adapter->netdev->dev_addr); if (status) return status; diff --git a/drivers/net/ethernet/mellanox/mlx4/cq.c b/drivers/net/ethernet/mellanox/mlx4/cq.c index a849da92f857e5..6b8635378f1fcb 100644 --- a/drivers/net/ethernet/mellanox/mlx4/cq.c +++ b/drivers/net/ethernet/mellanox/mlx4/cq.c @@ -101,13 +101,19 @@ void mlx4_cq_completion(struct mlx4_dev *dev, u32 cqn) { struct mlx4_cq *cq; + rcu_read_lock(); cq = radix_tree_lookup(&mlx4_priv(dev)->cq_table.tree, cqn & (dev->caps.num_cqs - 1)); + rcu_read_unlock(); + if (!cq) { mlx4_dbg(dev, "Completion event for bogus CQ %08x\n", cqn); return; } + /* Acessing the CQ outside of rcu_read_lock is safe, because + * the CQ is freed only after interrupt handling is completed. + */ ++cq->arm_sn; cq->comp(cq); @@ -118,23 +124,19 @@ void mlx4_cq_event(struct mlx4_dev *dev, u32 cqn, int event_type) struct mlx4_cq_table *cq_table = &mlx4_priv(dev)->cq_table; struct mlx4_cq *cq; - spin_lock(&cq_table->lock); - + rcu_read_lock(); cq = radix_tree_lookup(&cq_table->tree, cqn & (dev->caps.num_cqs - 1)); - if (cq) - atomic_inc(&cq->refcount); - - spin_unlock(&cq_table->lock); + rcu_read_unlock(); if (!cq) { - mlx4_warn(dev, "Async event for bogus CQ %08x\n", cqn); + mlx4_dbg(dev, "Async event for bogus CQ %08x\n", cqn); return; } + /* Acessing the CQ outside of rcu_read_lock is safe, because + * the CQ is freed only after interrupt handling is completed. + */ cq->event(cq, event_type); - - if (atomic_dec_and_test(&cq->refcount)) - complete(&cq->free); } static int mlx4_SW2HW_CQ(struct mlx4_dev *dev, struct mlx4_cmd_mailbox *mailbox, @@ -301,9 +303,9 @@ int mlx4_cq_alloc(struct mlx4_dev *dev, int nent, if (err) return err; - spin_lock_irq(&cq_table->lock); + spin_lock(&cq_table->lock); err = radix_tree_insert(&cq_table->tree, cq->cqn, cq); - spin_unlock_irq(&cq_table->lock); + spin_unlock(&cq_table->lock); if (err) goto err_icm; @@ -349,9 +351,9 @@ int mlx4_cq_alloc(struct mlx4_dev *dev, int nent, return 0; err_radix: - spin_lock_irq(&cq_table->lock); + spin_lock(&cq_table->lock); radix_tree_delete(&cq_table->tree, cq->cqn); - spin_unlock_irq(&cq_table->lock); + spin_unlock(&cq_table->lock); err_icm: mlx4_cq_free_icm(dev, cq->cqn); @@ -370,15 +372,15 @@ void mlx4_cq_free(struct mlx4_dev *dev, struct mlx4_cq *cq) if (err) mlx4_warn(dev, "HW2SW_CQ failed (%d) for CQN %06x\n", err, cq->cqn); + spin_lock(&cq_table->lock); + radix_tree_delete(&cq_table->tree, cq->cqn); + spin_unlock(&cq_table->lock); + synchronize_irq(priv->eq_table.eq[MLX4_CQ_TO_EQ_VECTOR(cq->vector)].irq); if (priv->eq_table.eq[MLX4_CQ_TO_EQ_VECTOR(cq->vector)].irq != priv->eq_table.eq[MLX4_EQ_ASYNC].irq) synchronize_irq(priv->eq_table.eq[MLX4_EQ_ASYNC].irq); - spin_lock_irq(&cq_table->lock); - radix_tree_delete(&cq_table->tree, cq->cqn); - spin_unlock_irq(&cq_table->lock); - if (atomic_dec_and_test(&cq->refcount)) complete(&cq->free); wait_for_completion(&cq->free); diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c index 4910d9af19335d..761f8b12399cab 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c @@ -1748,8 +1748,11 @@ int mlx4_en_start_port(struct net_device *dev) /* Process all completions if exist to prevent * the queues freezing if they are full */ - for (i = 0; i < priv->rx_ring_num; i++) + for 
(i = 0; i < priv->rx_ring_num; i++) { + local_bh_disable(); napi_schedule(&priv->rx_cq[i]->napi); + local_bh_enable(); + } netif_tx_start_all_queues(dev); netif_device_attach(dev); diff --git a/drivers/net/ethernet/mellanox/mlx4/eq.c b/drivers/net/ethernet/mellanox/mlx4/eq.c index cd3638e6fe25b2..0509996957d966 100644 --- a/drivers/net/ethernet/mellanox/mlx4/eq.c +++ b/drivers/net/ethernet/mellanox/mlx4/eq.c @@ -554,8 +554,9 @@ static int mlx4_eq_int(struct mlx4_dev *dev, struct mlx4_eq *eq) break; case MLX4_EVENT_TYPE_SRQ_LIMIT: - mlx4_dbg(dev, "%s: MLX4_EVENT_TYPE_SRQ_LIMIT\n", - __func__); + mlx4_dbg(dev, "%s: MLX4_EVENT_TYPE_SRQ_LIMIT. srq_no=0x%x, eq 0x%x\n", + __func__, be32_to_cpu(eqe->event.srq.srqn), + eq->eqn); case MLX4_EVENT_TYPE_SRQ_CATAS_ERROR: if (mlx4_is_master(dev)) { /* forward only to slave owning the SRQ */ @@ -570,15 +571,19 @@ static int mlx4_eq_int(struct mlx4_dev *dev, struct mlx4_eq *eq) eq->eqn, eq->cons_index, ret); break; } - mlx4_warn(dev, "%s: slave:%d, srq_no:0x%x, event: %02x(%02x)\n", - __func__, slave, - be32_to_cpu(eqe->event.srq.srqn), - eqe->type, eqe->subtype); + if (eqe->type == + MLX4_EVENT_TYPE_SRQ_CATAS_ERROR) + mlx4_warn(dev, "%s: slave:%d, srq_no:0x%x, event: %02x(%02x)\n", + __func__, slave, + be32_to_cpu(eqe->event.srq.srqn), + eqe->type, eqe->subtype); if (!ret && slave != dev->caps.function) { - mlx4_warn(dev, "%s: sending event %02x(%02x) to slave:%d\n", - __func__, eqe->type, - eqe->subtype, slave); + if (eqe->type == + MLX4_EVENT_TYPE_SRQ_CATAS_ERROR) + mlx4_warn(dev, "%s: sending event %02x(%02x) to slave:%d\n", + __func__, eqe->type, + eqe->subtype, slave); mlx4_slave_event(dev, slave, eqe); break; } diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c index 56185a0b827df6..1822382212eed5 100644 --- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c +++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c @@ -2980,6 +2980,9 @@ int mlx4_RST2INIT_QP_wrapper(struct mlx4_dev *dev, int slave, put_res(dev, slave, srqn, RES_SRQ); qp->srq = srq; } + + /* Save param3 for dynamic changes from VST back to VGT */ + qp->param3 = qpc->param3; put_res(dev, slave, rcqn, RES_CQ); put_res(dev, slave, mtt_base, RES_MTT); res_end_move(dev, slave, RES_QP, qpn); @@ -3772,7 +3775,6 @@ int mlx4_INIT2RTR_QP_wrapper(struct mlx4_dev *dev, int slave, int qpn = vhcr->in_modifier & 0x7fffff; struct res_qp *qp; u8 orig_sched_queue; - __be32 orig_param3 = qpc->param3; u8 orig_vlan_control = qpc->pri_path.vlan_control; u8 orig_fvl_rx = qpc->pri_path.fvl_rx; u8 orig_pri_path_fl = qpc->pri_path.fl; @@ -3814,7 +3816,6 @@ int mlx4_INIT2RTR_QP_wrapper(struct mlx4_dev *dev, int slave, */ if (!err) { qp->sched_queue = orig_sched_queue; - qp->param3 = orig_param3; qp->vlan_control = orig_vlan_control; qp->fvl_rx = orig_fvl_rx; qp->pri_path_fl = orig_pri_path_fl; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 118cea5e5489e6..46bef6a26a8cdb 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -668,9 +668,12 @@ static int mlx5e_route_lookup_ipv4(struct mlx5e_priv *priv, int ttl; #if IS_ENABLED(CONFIG_INET) + int ret; + rt = ip_route_output_key(dev_net(mirred_dev), fl4); - if (IS_ERR(rt)) - return PTR_ERR(rt); + ret = PTR_ERR_OR_ZERO(rt); + if (ret) + return ret; #else return -EOPNOTSUPP; #endif @@ -741,8 +744,8 @@ static int mlx5e_create_encap_header_ipv4(struct 
mlx5e_priv *priv, struct flowi4 fl4 = {}; char *encap_header; int encap_size; - __be32 saddr = 0; - int ttl = 0; + __be32 saddr; + int ttl; int err; encap_header = kzalloc(max_encap_size, GFP_KERNEL); diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h b/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h index d147ddd97997e3..0af3338bfcb4fb 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h +++ b/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h @@ -209,21 +209,21 @@ MLXSW_ITEM32(pci, eqe, owner, 0x0C, 0, 1); /* pci_eqe_cmd_token * Command completion event - token */ -MLXSW_ITEM32(pci, eqe, cmd_token, 0x08, 16, 16); +MLXSW_ITEM32(pci, eqe, cmd_token, 0x00, 16, 16); /* pci_eqe_cmd_status * Command completion event - status */ -MLXSW_ITEM32(pci, eqe, cmd_status, 0x08, 0, 8); +MLXSW_ITEM32(pci, eqe, cmd_status, 0x00, 0, 8); /* pci_eqe_cmd_out_param_h * Command completion event - output parameter - higher part */ -MLXSW_ITEM32(pci, eqe, cmd_out_param_h, 0x0C, 0, 32); +MLXSW_ITEM32(pci, eqe, cmd_out_param_h, 0x04, 0, 32); /* pci_eqe_cmd_out_param_l * Command completion event - output parameter - lower part */ -MLXSW_ITEM32(pci, eqe, cmd_out_param_l, 0x10, 0, 32); +MLXSW_ITEM32(pci, eqe, cmd_out_param_l, 0x08, 0, 32); #endif diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c index d768c7b6c6d668..003093abb1707f 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c @@ -684,6 +684,7 @@ static netdev_tx_t mlxsw_sp_port_xmit(struct sk_buff *skb, dev_kfree_skb_any(skb_orig); return NETDEV_TX_OK; } + dev_consume_skb_any(skb_orig); } if (eth_skb_pad(skb)) { diff --git a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c index 150ccf5192a989..2e88115e873597 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c +++ b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c @@ -345,6 +345,7 @@ static netdev_tx_t mlxsw_sx_port_xmit(struct sk_buff *skb, dev_kfree_skb_any(skb_orig); return NETDEV_TX_OK; } + dev_consume_skb_any(skb_orig); } mlxsw_sx_txhdr_construct(skb, &tx_info); /* TX header is consumed by HW on the way so we shouldn't count its diff --git a/drivers/net/ethernet/qualcomm/emac/emac-phy.c b/drivers/net/ethernet/qualcomm/emac/emac-phy.c index 99a14df28b9634..2851b4c5657049 100644 --- a/drivers/net/ethernet/qualcomm/emac/emac-phy.c +++ b/drivers/net/ethernet/qualcomm/emac/emac-phy.c @@ -201,6 +201,13 @@ int emac_phy_config(struct platform_device *pdev, struct emac_adapter *adpt) else adpt->phydev = mdiobus_get_phy(mii_bus, phy_addr); + /* of_phy_find_device() claims a reference to the phydev, + * so we do that here manually as well. When the driver + * later unloads, it can unilaterally drop the reference + * without worrying about ACPI vs DT. 
+ */ + if (adpt->phydev) + get_device(&adpt->phydev->mdio.dev); } else { struct device_node *phy_np; diff --git a/drivers/net/ethernet/qualcomm/emac/emac.c b/drivers/net/ethernet/qualcomm/emac/emac.c index 422289c232bc77..f46d300bd58597 100644 --- a/drivers/net/ethernet/qualcomm/emac/emac.c +++ b/drivers/net/ethernet/qualcomm/emac/emac.c @@ -719,8 +719,7 @@ static int emac_probe(struct platform_device *pdev) err_undo_napi: netif_napi_del(&adpt->rx_q.napi); err_undo_mdiobus: - if (!has_acpi_companion(&pdev->dev)) - put_device(&adpt->phydev->mdio.dev); + put_device(&adpt->phydev->mdio.dev); mdiobus_unregister(adpt->mii_bus); err_undo_clocks: emac_clks_teardown(adpt); @@ -740,8 +739,7 @@ static int emac_remove(struct platform_device *pdev) emac_clks_teardown(adpt); - if (!has_acpi_companion(&pdev->dev)) - put_device(&adpt->phydev->mdio.dev); + put_device(&adpt->phydev->mdio.dev); mdiobus_unregister(adpt->mii_bus); free_netdev(netdev); diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index 92d7692c840dbc..89ac1e3f617599 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -926,14 +926,10 @@ static int ravb_poll(struct napi_struct *napi, int budget) /* Receive error message handling */ priv->rx_over_errors = priv->stats[RAVB_BE].rx_over_errors; priv->rx_over_errors += priv->stats[RAVB_NC].rx_over_errors; - if (priv->rx_over_errors != ndev->stats.rx_over_errors) { + if (priv->rx_over_errors != ndev->stats.rx_over_errors) ndev->stats.rx_over_errors = priv->rx_over_errors; - netif_err(priv, rx_err, ndev, "Receive Descriptor Empty\n"); - } - if (priv->rx_fifo_errors != ndev->stats.rx_fifo_errors) { + if (priv->rx_fifo_errors != ndev->stats.rx_fifo_errors) ndev->stats.rx_fifo_errors = priv->rx_fifo_errors; - netif_err(priv, rx_err, ndev, "Receive FIFO Overflow\n"); - } out: return budget - quota; } @@ -1508,6 +1504,19 @@ static netdev_tx_t ravb_start_xmit(struct sk_buff *skb, struct net_device *ndev) buffer = PTR_ALIGN(priv->tx_align[q], DPTR_ALIGN) + entry / NUM_TX_DESC * DPTR_ALIGN; len = PTR_ALIGN(skb->data, DPTR_ALIGN) - skb->data; + /* Zero length DMA descriptors are problematic as they seem to + * terminate DMA transfers. Avoid them by simply using a length of + * DPTR_ALIGN (4) when skb data is aligned to DPTR_ALIGN. + * + * As skb is guaranteed to have at least ETH_ZLEN (60) bytes of + * data by the call to skb_put_padto() above this is safe with + * respect to both the length of the first DMA descriptor (len) + * overflowing the available data and the length of the second DMA + * descriptor (skb->len - len) being negative. 
+ */ + if (len == 0) + len = DPTR_ALIGN; + memcpy(buffer, skb->data, len); dma_addr = dma_map_single(ndev->dev.parent, buffer, len, DMA_TO_DEVICE); if (dma_mapping_error(ndev->dev.parent, dma_addr)) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index a276a32d57f243..e3f6389e1b01c5 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -3326,9 +3326,9 @@ int stmmac_dvr_probe(struct device *device, (priv->plat->maxmtu >= ndev->min_mtu)) ndev->max_mtu = priv->plat->maxmtu; else if (priv->plat->maxmtu < ndev->min_mtu) - netdev_warn(priv->dev, - "%s: warning: maxmtu having invalid value (%d)\n", - __func__, priv->plat->maxmtu); + dev_warn(priv->device, + "%s: warning: maxmtu having invalid value (%d)\n", + __func__, priv->plat->maxmtu); if (flow_ctrl) priv->flow_ctrl = FLOW_AUTO; /* RX/TX pause on */ @@ -3340,7 +3340,8 @@ int stmmac_dvr_probe(struct device *device, */ if ((priv->synopsys_id >= DWMAC_CORE_3_50) && (!priv->plat->riwt_off)) { priv->use_riwt = 1; - netdev_info(priv->dev, "Enable RX Mitigation via HW Watchdog Timer\n"); + dev_info(priv->device, + "Enable RX Mitigation via HW Watchdog Timer\n"); } netif_napi_add(ndev, &priv->napi, stmmac_poll, 64); @@ -3366,17 +3367,17 @@ int stmmac_dvr_probe(struct device *device, /* MDIO bus Registration */ ret = stmmac_mdio_register(ndev); if (ret < 0) { - netdev_err(priv->dev, - "%s: MDIO bus (id: %d) registration failed", - __func__, priv->plat->bus_id); + dev_err(priv->device, + "%s: MDIO bus (id: %d) registration failed", + __func__, priv->plat->bus_id); goto error_mdio_register; } } ret = register_netdev(ndev); if (ret) { - netdev_err(priv->dev, "%s: ERROR %i registering the device\n", - __func__, ret); + dev_err(priv->device, "%s: ERROR %i registering the device\n", + __func__, ret); goto error_netdev_register; } diff --git a/drivers/net/ethernet/ti/cpmac.c b/drivers/net/ethernet/ti/cpmac.c index 77c88fcf2b86f6..9b8a30bf939bf9 100644 --- a/drivers/net/ethernet/ti/cpmac.c +++ b/drivers/net/ethernet/ti/cpmac.c @@ -1210,7 +1210,7 @@ int cpmac_init(void) goto fail_alloc; } -#warning FIXME: unhardcode gpio&reset bits + /* FIXME: unhardcode gpio&reset bits */ ar7_gpio_disable(26); ar7_gpio_disable(27); ar7_device_reset(AR7_RESET_BIT_CPMAC_LO); diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c index c9414c05485266..fcab8019dda08a 100644 --- a/drivers/net/hyperv/netvsc_drv.c +++ b/drivers/net/hyperv/netvsc_drv.c @@ -659,6 +659,7 @@ int netvsc_recv_callback(struct hv_device *device_obj, * policy filters on the host). Deliver these via the VF * interface in the guest. */ + rcu_read_lock(); vf_netdev = rcu_dereference(net_device_ctx->vf_netdev); if (vf_netdev && (vf_netdev->flags & IFF_UP)) net = vf_netdev; @@ -667,6 +668,7 @@ int netvsc_recv_callback(struct hv_device *device_obj, skb = netvsc_alloc_recv_skb(net, packet, csum_info, *data, vlan_tci); if (unlikely(!skb)) { ++net->stats.rx_dropped; + rcu_read_unlock(); return NVSP_STAT_FAIL; } @@ -696,6 +698,7 @@ int netvsc_recv_callback(struct hv_device *device_obj, * TODO - use NAPI? 
*/ netif_rx(skb); + rcu_read_unlock(); return 0; } diff --git a/drivers/net/ieee802154/at86rf230.c b/drivers/net/ieee802154/at86rf230.c index 46d53a6c8cf868..76ba7ecfe14290 100644 --- a/drivers/net/ieee802154/at86rf230.c +++ b/drivers/net/ieee802154/at86rf230.c @@ -1715,9 +1715,9 @@ static int at86rf230_probe(struct spi_device *spi) /* Reset */ if (gpio_is_valid(rstn)) { udelay(1); - gpio_set_value(rstn, 0); + gpio_set_value_cansleep(rstn, 0); udelay(1); - gpio_set_value(rstn, 1); + gpio_set_value_cansleep(rstn, 1); usleep_range(120, 240); } diff --git a/drivers/net/ieee802154/atusb.c b/drivers/net/ieee802154/atusb.c index 1253f864737ae3..ef688518ad77d7 100644 --- a/drivers/net/ieee802154/atusb.c +++ b/drivers/net/ieee802154/atusb.c @@ -117,13 +117,26 @@ static int atusb_read_reg(struct atusb *atusb, uint8_t reg) { struct usb_device *usb_dev = atusb->usb_dev; int ret; + uint8_t *buffer; uint8_t value; + buffer = kmalloc(1, GFP_KERNEL); + if (!buffer) + return -ENOMEM; + dev_dbg(&usb_dev->dev, "atusb: reg = 0x%x\n", reg); ret = atusb_control_msg(atusb, usb_rcvctrlpipe(usb_dev, 0), ATUSB_REG_READ, ATUSB_REQ_FROM_DEV, - 0, reg, &value, 1, 1000); - return ret >= 0 ? value : ret; + 0, reg, buffer, 1, 1000); + + if (ret >= 0) { + value = buffer[0]; + kfree(buffer); + return value; + } else { + kfree(buffer); + return ret; + } } static int atusb_write_subreg(struct atusb *atusb, uint8_t reg, uint8_t mask, @@ -549,13 +562,6 @@ static int atusb_set_frame_retries(struct ieee802154_hw *hw, s8 retries) { struct atusb *atusb = hw->priv; - struct device *dev = &atusb->usb_dev->dev; - - if (atusb->fw_ver_maj == 0 && atusb->fw_ver_min < 3) { - dev_info(dev, "Automatic frame retransmission is only available from " - "firmware version 0.3. Please update if you want this feature."); - return -EINVAL; - } return atusb_write_subreg(atusb, SR_MAX_FRAME_RETRIES, retries); } @@ -608,9 +614,13 @@ static const struct ieee802154_ops atusb_ops = { static int atusb_get_and_show_revision(struct atusb *atusb) { struct usb_device *usb_dev = atusb->usb_dev; - unsigned char buffer[3]; + unsigned char *buffer; int ret; + buffer = kmalloc(3, GFP_KERNEL); + if (!buffer) + return -ENOMEM; + /* Get a couple of the ATMega Firmware values */ ret = atusb_control_msg(atusb, usb_rcvctrlpipe(usb_dev, 0), ATUSB_ID, ATUSB_REQ_FROM_DEV, 0, 0, @@ -631,15 +641,20 @@ static int atusb_get_and_show_revision(struct atusb *atusb) dev_info(&usb_dev->dev, "Please update to version 0.2 or newer"); } + kfree(buffer); return ret; } static int atusb_get_and_show_build(struct atusb *atusb) { struct usb_device *usb_dev = atusb->usb_dev; - char build[ATUSB_BUILD_SIZE + 1]; + char *build; int ret; + build = kmalloc(ATUSB_BUILD_SIZE + 1, GFP_KERNEL); + if (!build) + return -ENOMEM; + ret = atusb_control_msg(atusb, usb_rcvctrlpipe(usb_dev, 0), ATUSB_BUILD, ATUSB_REQ_FROM_DEV, 0, 0, build, ATUSB_BUILD_SIZE, 1000); @@ -648,6 +663,7 @@ static int atusb_get_and_show_build(struct atusb *atusb) dev_info(&usb_dev->dev, "Firmware: build %s\n", build); } + kfree(build); return ret; } @@ -698,7 +714,7 @@ static int atusb_get_and_show_chip(struct atusb *atusb) static int atusb_set_extended_addr(struct atusb *atusb) { struct usb_device *usb_dev = atusb->usb_dev; - unsigned char buffer[IEEE802154_EXTENDED_ADDR_LEN]; + unsigned char *buffer; __le64 extended_addr; u64 addr; int ret; @@ -710,12 +726,20 @@ static int atusb_set_extended_addr(struct atusb *atusb) return 0; } + buffer = kmalloc(IEEE802154_EXTENDED_ADDR_LEN, GFP_KERNEL); + if (!buffer) + return -ENOMEM; + /* 
Firmware is new enough so we fetch the address from EEPROM */ ret = atusb_control_msg(atusb, usb_rcvctrlpipe(usb_dev, 0), ATUSB_EUI64_READ, ATUSB_REQ_FROM_DEV, 0, 0, buffer, IEEE802154_EXTENDED_ADDR_LEN, 1000); - if (ret < 0) - dev_err(&usb_dev->dev, "failed to fetch extended address\n"); + if (ret < 0) { + dev_err(&usb_dev->dev, "failed to fetch extended address, random address set\n"); + ieee802154_random_extended_addr(&atusb->hw->phy->perm_extended_addr); + kfree(buffer); + return ret; + } memcpy(&extended_addr, buffer, IEEE802154_EXTENDED_ADDR_LEN); /* Check if read address is not empty and the unicast bit is set correctly */ @@ -729,6 +753,7 @@ static int atusb_set_extended_addr(struct atusb *atusb) &addr); } + kfree(buffer); return ret; } @@ -770,8 +795,7 @@ static int atusb_probe(struct usb_interface *interface, hw->parent = &usb_dev->dev; hw->flags = IEEE802154_HW_TX_OMIT_CKSUM | IEEE802154_HW_AFILT | - IEEE802154_HW_PROMISCUOUS | IEEE802154_HW_CSMA_PARAMS | - IEEE802154_HW_FRAME_RETRIES; + IEEE802154_HW_PROMISCUOUS | IEEE802154_HW_CSMA_PARAMS; hw->phy->flags = WPAN_PHY_FLAG_TXPOWER | WPAN_PHY_FLAG_CCA_ED_LEVEL | WPAN_PHY_FLAG_CCA_MODE; @@ -800,6 +824,9 @@ static int atusb_probe(struct usb_interface *interface, atusb_get_and_show_build(atusb); atusb_set_extended_addr(atusb); + if (atusb->fw_ver_maj >= 0 && atusb->fw_ver_min >= 3) + hw->flags |= IEEE802154_HW_FRAME_RETRIES; + ret = atusb_get_and_clear_error(atusb); if (ret) { dev_err(&atusb->usb_dev->dev, diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c index e84ae084e259c9..ca1b462bf7b278 100644 --- a/drivers/net/phy/dp83867.c +++ b/drivers/net/phy/dp83867.c @@ -132,12 +132,16 @@ static int dp83867_of_init(struct phy_device *phydev) ret = of_property_read_u32(of_node, "ti,rx-internal-delay", &dp83867->rx_id_delay); - if (ret) + if (ret && + (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID || + phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID)) return ret; ret = of_property_read_u32(of_node, "ti,tx-internal-delay", &dp83867->tx_id_delay); - if (ret) + if (ret && + (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID || + phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID)) return ret; return of_property_read_u32(of_node, "ti,fifo-depth", diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index be418563cb18c6..f3b48ad90865d0 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -1730,7 +1730,7 @@ static u8 r8152_rx_csum(struct r8152 *tp, struct rx_desc *rx_desc) u8 checksum = CHECKSUM_NONE; u32 opts2, opts3; - if (tp->version == RTL_VER_01 || tp->version == RTL_VER_02) + if (!(tp->netdev->features & NETIF_F_RXCSUM)) goto return_result; opts2 = le32_to_cpu(rx_desc->opts2); @@ -4356,6 +4356,11 @@ static int rtl8152_probe(struct usb_interface *intf, NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | NETIF_F_IPV6_CSUM | NETIF_F_TSO6; + if (tp->version == RTL_VER_01) { + netdev->features &= ~NETIF_F_RXCSUM; + netdev->hw_features &= ~NETIF_F_RXCSUM; + } + netdev->ethtool_ops = &ops; netif_set_gso_max_size(netdev, RTL_LIMITED_TSO_SIZE); diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c index bb70dd5723b587..ca7196c400609b 100644 --- a/drivers/net/vxlan.c +++ b/drivers/net/vxlan.c @@ -1798,7 +1798,7 @@ static int vxlan_build_skb(struct sk_buff *skb, struct dst_entry *dst, static struct rtable *vxlan_get_route(struct vxlan_dev *vxlan, struct net_device *dev, struct vxlan_sock *sock4, struct sk_buff *skb, int oif, u8 tos, - __be32 daddr, __be32 *saddr, + __be32 daddr, __be32 *saddr, __be16 dport, __be16 
sport, struct dst_cache *dst_cache, const struct ip_tunnel_info *info) { @@ -1824,6 +1824,8 @@ static struct rtable *vxlan_get_route(struct vxlan_dev *vxlan, struct net_device fl4.flowi4_proto = IPPROTO_UDP; fl4.daddr = daddr; fl4.saddr = *saddr; + fl4.fl4_dport = dport; + fl4.fl4_sport = sport; rt = ip_route_output_key(vxlan->net, &fl4); if (likely(!IS_ERR(rt))) { @@ -1851,6 +1853,7 @@ static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan, __be32 label, const struct in6_addr *daddr, struct in6_addr *saddr, + __be16 dport, __be16 sport, struct dst_cache *dst_cache, const struct ip_tunnel_info *info) { @@ -1877,6 +1880,8 @@ static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan, fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tos), label); fl6.flowi6_mark = skb->mark; fl6.flowi6_proto = IPPROTO_UDP; + fl6.fl6_dport = dport; + fl6.fl6_sport = sport; err = ipv6_stub->ipv6_dst_lookup(vxlan->net, sock6->sock->sk, @@ -2068,6 +2073,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, rdst ? rdst->remote_ifindex : 0, tos, dst->sin.sin_addr.s_addr, &src->sin.sin_addr.s_addr, + dst_port, src_port, dst_cache, info); if (IS_ERR(rt)) { err = PTR_ERR(rt); @@ -2104,6 +2110,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, rdst ? rdst->remote_ifindex : 0, tos, label, &dst->sin6.sin6_addr, &src->sin6.sin6_addr, + dst_port, src_port, dst_cache, info); if (IS_ERR(ndst)) { err = PTR_ERR(ndst); @@ -2430,7 +2437,7 @@ static int vxlan_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb) rt = vxlan_get_route(vxlan, dev, sock4, skb, 0, info->key.tos, info->key.u.ipv4.dst, - &info->key.u.ipv4.src, NULL, info); + &info->key.u.ipv4.src, dport, sport, NULL, info); if (IS_ERR(rt)) return PTR_ERR(rt); ip_rt_put(rt); @@ -2441,7 +2448,7 @@ static int vxlan_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb) ndst = vxlan6_get_route(vxlan, dev, sock6, skb, 0, info->key.tos, info->key.label, &info->key.u.ipv6.dst, - &info->key.u.ipv6.src, NULL, info); + &info->key.u.ipv6.src, dport, sport, NULL, info); if (IS_ERR(ndst)) return PTR_ERR(ndst); dst_release(ndst); diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 6dcbc5defb7a8d..ecc151697fd4bd 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -38,7 +38,6 @@ #include #include #include -#include #include #include #include @@ -1083,7 +1082,8 @@ int nfs4_call_sync(struct rpc_clnt *clnt, return nfs4_call_sync_sequence(clnt, server, msg, args, res); } -static void update_changeattr(struct inode *dir, struct nfs4_change_info *cinfo) +static void update_changeattr(struct inode *dir, struct nfs4_change_info *cinfo, + unsigned long timestamp) { struct nfs_inode *nfsi = NFS_I(dir); @@ -1099,6 +1099,7 @@ static void update_changeattr(struct inode *dir, struct nfs4_change_info *cinfo) NFS_INO_INVALID_ACL; } dir->i_version = cinfo->after; + nfsi->read_cache_jiffies = timestamp; nfsi->attr_gencount = nfs_inc_attr_generation_counter(); nfs_fscache_invalidate(dir); spin_unlock(&dir->i_lock); @@ -2391,11 +2392,13 @@ static int _nfs4_proc_open(struct nfs4_opendata *data) nfs_fattr_map_and_free_names(server, &data->f_attr); if (o_arg->open_flags & O_CREAT) { - update_changeattr(dir, &o_res->cinfo); if (o_arg->open_flags & O_EXCL) data->file_created = 1; else if (o_res->cinfo.before != o_res->cinfo.after) data->file_created = 1; + if (data->file_created || dir->i_version != o_res->cinfo.after) + update_changeattr(dir, &o_res->cinfo, + o_res->f_attr->time_start); } if ((o_res->rflags & 
NFS4_OPEN_RESULT_LOCKTYPE_POSIX) == 0) server->caps &= ~NFS_CAP_POSIX_LOCK; @@ -4073,11 +4076,12 @@ static int _nfs4_proc_remove(struct inode *dir, const struct qstr *name) .rpc_argp = &args, .rpc_resp = &res, }; + unsigned long timestamp = jiffies; int status; status = nfs4_call_sync(server->client, server, &msg, &args.seq_args, &res.seq_res, 1); if (status == 0) - update_changeattr(dir, &res.cinfo); + update_changeattr(dir, &res.cinfo, timestamp); return status; } @@ -4125,7 +4129,8 @@ static int nfs4_proc_unlink_done(struct rpc_task *task, struct inode *dir) if (nfs4_async_handle_error(task, res->server, NULL, &data->timeout) == -EAGAIN) return 0; - update_changeattr(dir, &res->cinfo); + if (task->tk_status == 0) + update_changeattr(dir, &res->cinfo, res->dir_attr->time_start); return 1; } @@ -4159,8 +4164,11 @@ static int nfs4_proc_rename_done(struct rpc_task *task, struct inode *old_dir, if (nfs4_async_handle_error(task, res->server, NULL, &data->timeout) == -EAGAIN) return 0; - update_changeattr(old_dir, &res->old_cinfo); - update_changeattr(new_dir, &res->new_cinfo); + if (task->tk_status == 0) { + update_changeattr(old_dir, &res->old_cinfo, res->old_fattr->time_start); + if (new_dir != old_dir) + update_changeattr(new_dir, &res->new_cinfo, res->new_fattr->time_start); + } return 1; } @@ -4197,7 +4205,7 @@ static int _nfs4_proc_link(struct inode *inode, struct inode *dir, const struct status = nfs4_call_sync(server->client, server, &msg, &arg.seq_args, &res.seq_res, 1); if (!status) { - update_changeattr(dir, &res.cinfo); + update_changeattr(dir, &res.cinfo, res.fattr->time_start); status = nfs_post_op_update_inode(inode, res.fattr); if (!status) nfs_setsecurity(inode, res.fattr, res.label); @@ -4272,7 +4280,8 @@ static int nfs4_do_create(struct inode *dir, struct dentry *dentry, struct nfs4_ int status = nfs4_call_sync(NFS_SERVER(dir)->client, NFS_SERVER(dir), &data->msg, &data->arg.seq_args, &data->res.seq_res, 1); if (status == 0) { - update_changeattr(dir, &data->res.dir_cinfo); + update_changeattr(dir, &data->res.dir_cinfo, + data->res.fattr->time_start); status = nfs_instantiate(dentry, data->res.fh, data->res.fattr, data->res.label); } return status; @@ -6127,7 +6136,6 @@ static struct nfs4_lockdata *nfs4_alloc_lockdata(struct file_lock *fl, p->server = server; atomic_inc(&lsp->ls_count); p->ctx = get_nfs_open_context(ctx); - get_file(fl->fl_file); memcpy(&p->fl, fl, sizeof(p->fl)); return p; out_free_seqid: @@ -6240,7 +6248,6 @@ static void nfs4_lock_release(void *calldata) nfs_free_seqid(data->arg.lock_seqid); nfs4_put_lock_state(data->lsp); put_nfs_open_context(data->ctx); - fput(data->fl.fl_file); kfree(data); dprintk("%s: done!\n", __func__); } diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c index 1d152f4470cd6f..90e6193ce6bed3 100644 --- a/fs/nfs/nfs4state.c +++ b/fs/nfs/nfs4state.c @@ -1729,7 +1729,6 @@ static int nfs4_recovery_handle_error(struct nfs_client *clp, int error) break; case -NFS4ERR_STALE_CLIENTID: set_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state); - nfs4_state_clear_reclaim_reboot(clp); nfs4_state_start_reclaim_reboot(clp); break; case -NFS4ERR_EXPIRED: diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c index 7ecf16be4a444e..8fae53ce21d16c 100644 --- a/fs/nfsd/nfs4xdr.c +++ b/fs/nfsd/nfs4xdr.c @@ -2440,7 +2440,9 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp, p++; /* to be backfilled later */ if (bmval0 & FATTR4_WORD0_SUPPORTED_ATTRS) { - u32 *supp = nfsd_suppattrs[minorversion]; + u32 supp[3]; + + memcpy(supp, 
nfsd_suppattrs[minorversion], sizeof(supp)); if (!IS_POSIXACL(dentry->d_inode)) supp[0] &= ~FATTR4_WORD0_ACL; diff --git a/include/linux/bpf.h b/include/linux/bpf.h index f74ae68086dc69..05cf951df3fedd 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -216,7 +216,7 @@ u64 bpf_tail_call(u64 ctx, u64 r2, u64 index, u64 r4, u64 r5); u64 bpf_get_stackid(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5); bool bpf_prog_array_compatible(struct bpf_array *array, const struct bpf_prog *fp); -int bpf_prog_calc_digest(struct bpf_prog *fp); +int bpf_prog_calc_tag(struct bpf_prog *fp); const struct bpf_func_proto *bpf_get_trace_printk_proto(void); diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h index 20bfefbe759416..d936a0021839cc 100644 --- a/include/linux/cpuhotplug.h +++ b/include/linux/cpuhotplug.h @@ -74,6 +74,8 @@ enum cpuhp_state { CPUHP_ZCOMP_PREPARE, CPUHP_TIMERS_DEAD, CPUHP_MIPS_SOC_PREPARE, + CPUHP_BP_PREPARE_DYN, + CPUHP_BP_PREPARE_DYN_END = CPUHP_BP_PREPARE_DYN + 20, CPUHP_BRINGUP_CPU, CPUHP_AP_IDLE_DEAD, CPUHP_AP_OFFLINE, diff --git a/include/linux/filter.h b/include/linux/filter.h index a0934e6c9babf8..e4eb2546339afb 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -57,6 +57,8 @@ struct bpf_prog_aux; /* BPF program can access up to 512 bytes of stack space. */ #define MAX_BPF_STACK 512 +#define BPF_TAG_SIZE 8 + /* Helper macros for filter block array initializers. */ /* ALU ops on registers, bpf_add|sub|...: dst_reg += src_reg */ @@ -408,7 +410,7 @@ struct bpf_prog { kmemcheck_bitfield_end(meta); enum bpf_prog_type type; /* Type of BPF program */ u32 len; /* Number of filter blocks */ - u32 digest[SHA_DIGEST_WORDS]; /* Program digest */ + u8 tag[BPF_TAG_SIZE]; struct bpf_prog_aux *aux; /* Auxiliary fields */ struct sock_fprog_kern *orig_prog; /* Original BPF program */ unsigned int (*bpf_func)(const void *ctx, @@ -519,7 +521,7 @@ static inline u32 bpf_prog_insn_size(const struct bpf_prog *prog) return prog->len * sizeof(struct bpf_insn); } -static inline u32 bpf_prog_digest_scratch_size(const struct bpf_prog *prog) +static inline u32 bpf_prog_tag_scratch_size(const struct bpf_prog *prog) { return round_up(bpf_prog_insn_size(prog) + sizeof(__be64) + 1, SHA_MESSAGE_BYTES); diff --git a/include/linux/kernel.h b/include/linux/kernel.h index 56aec84237ad5b..cb09238f6d32be 100644 --- a/include/linux/kernel.h +++ b/include/linux/kernel.h @@ -514,8 +514,8 @@ extern enum system_states { #define TAINT_FLAGS_COUNT 16 struct taint_flag { - char true; /* character printed when tainted */ - char false; /* character printed when not tainted */ + char c_true; /* character printed when tainted */ + char c_false; /* character printed when not tainted */ bool module; /* also show as a per-module taint flag */ }; diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index 321f9ed552a995..01f71e1d2e941e 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -444,6 +444,10 @@ bool __rcu_is_watching(void); #error "Unknown RCU implementation specified to kernel configuration" #endif +#define RCU_SCHEDULER_INACTIVE 0 +#define RCU_SCHEDULER_INIT 1 +#define RCU_SCHEDULER_RUNNING 2 + /* * init_rcu_head_on_stack()/destroy_rcu_head_on_stack() are needed for dynamic * initialization and destruction of rcu_head on the stack. 
rcu_head structures diff --git a/include/linux/sunrpc/svc_xprt.h b/include/linux/sunrpc/svc_xprt.h index e5d19344037491..7440290f64acd3 100644 --- a/include/linux/sunrpc/svc_xprt.h +++ b/include/linux/sunrpc/svc_xprt.h @@ -66,6 +66,7 @@ struct svc_xprt { #define XPT_LISTENER 10 /* listening endpoint */ #define XPT_CACHE_AUTH 11 /* cache auth info */ #define XPT_LOCAL 12 /* connection from loopback interface */ +#define XPT_KILL_TEMP 13 /* call xpo_kill_temp_xprt before closing */ struct svc_serv *xpt_server; /* service for transport */ atomic_t xpt_reserved; /* space on outq that is rsvd */ diff --git a/include/linux/tcp.h b/include/linux/tcp.h index fc5848dad7a432..c93f4b3a59cb7a 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -62,8 +62,13 @@ static inline unsigned int tcp_optlen(const struct sk_buff *skb) /* TCP Fast Open Cookie as stored in memory */ struct tcp_fastopen_cookie { + union { + u8 val[TCP_FASTOPEN_COOKIE_MAX]; +#if IS_ENABLED(CONFIG_IPV6) + struct in6_addr addr; +#endif + }; s8 len; - u8 val[TCP_FASTOPEN_COOKIE_MAX]; bool exp; /* In RFC6994 experimental option format */ }; diff --git a/include/uapi/linux/nl80211.h b/include/uapi/linux/nl80211.h index 6b76e3b0c18eac..bea982af9cfb80 100644 --- a/include/uapi/linux/nl80211.h +++ b/include/uapi/linux/nl80211.h @@ -1772,7 +1772,9 @@ enum nl80211_commands { * * @NL80211_ATTR_OPMODE_NOTIF: Operating mode field from Operating Mode * Notification Element based on association request when used with - * %NL80211_CMD_NEW_STATION; u8 attribute. + * %NL80211_CMD_NEW_STATION or %NL80211_CMD_SET_STATION (only when + * %NL80211_FEATURE_FULL_AP_CLIENT_STATE is supported, or with TDLS); + * u8 attribute. * * @NL80211_ATTR_VENDOR_ID: The vendor ID, either a 24-bit OUI or, if * %NL80211_VENDOR_ID_IS_LINUX is set, a special Linux ID (not used yet) diff --git a/include/uapi/linux/pkt_cls.h b/include/uapi/linux/pkt_cls.h index cb4bcdc5854372..a4dcd88ec27186 100644 --- a/include/uapi/linux/pkt_cls.h +++ b/include/uapi/linux/pkt_cls.h @@ -397,7 +397,7 @@ enum { TCA_BPF_NAME, TCA_BPF_FLAGS, TCA_BPF_FLAGS_GEN, - TCA_BPF_DIGEST, + TCA_BPF_TAG, __TCA_BPF_MAX, }; diff --git a/include/uapi/linux/tc_act/tc_bpf.h b/include/uapi/linux/tc_act/tc_bpf.h index a6b88a6f7f712b..975b50dc8d1d46 100644 --- a/include/uapi/linux/tc_act/tc_bpf.h +++ b/include/uapi/linux/tc_act/tc_bpf.h @@ -27,7 +27,7 @@ enum { TCA_ACT_BPF_FD, TCA_ACT_BPF_NAME, TCA_ACT_BPF_PAD, - TCA_ACT_BPF_DIGEST, + TCA_ACT_BPF_TAG, __TCA_ACT_BPF_MAX, }; #define TCA_ACT_BPF_MAX (__TCA_ACT_BPF_MAX - 1) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 1eb4f130375616..503d4211988afe 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -146,10 +146,11 @@ void __bpf_prog_free(struct bpf_prog *fp) vfree(fp); } -int bpf_prog_calc_digest(struct bpf_prog *fp) +int bpf_prog_calc_tag(struct bpf_prog *fp) { const u32 bits_offset = SHA_MESSAGE_BYTES - sizeof(__be64); - u32 raw_size = bpf_prog_digest_scratch_size(fp); + u32 raw_size = bpf_prog_tag_scratch_size(fp); + u32 digest[SHA_DIGEST_WORDS]; u32 ws[SHA_WORKSPACE_WORDS]; u32 i, bsize, psize, blocks; struct bpf_insn *dst; @@ -162,7 +163,7 @@ int bpf_prog_calc_digest(struct bpf_prog *fp) if (!raw) return -ENOMEM; - sha_init(fp->digest); + sha_init(digest); memset(ws, 0, sizeof(ws)); /* We need to take out the map fd for the digest calculation @@ -204,13 +205,14 @@ int bpf_prog_calc_digest(struct bpf_prog *fp) *bits = cpu_to_be64((psize - 1) << 3); while (blocks--) { - sha_transform(fp->digest, todo, ws); + sha_transform(digest, todo, 
ws); todo += SHA_MESSAGE_BYTES; } - result = (__force __be32 *)fp->digest; + result = (__force __be32 *)digest; for (i = 0; i < SHA_DIGEST_WORDS; i++) - result[i] = cpu_to_be32(fp->digest[i]); + result[i] = cpu_to_be32(digest[i]); + memcpy(fp->tag, result, sizeof(fp->tag)); vfree(raw); return 0; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index e89acea22ecfc9..1d6b29e4e2c35e 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -688,17 +688,17 @@ static int bpf_prog_release(struct inode *inode, struct file *filp) static void bpf_prog_show_fdinfo(struct seq_file *m, struct file *filp) { const struct bpf_prog *prog = filp->private_data; - char prog_digest[sizeof(prog->digest) * 2 + 1] = { }; + char prog_tag[sizeof(prog->tag) * 2 + 1] = { }; - bin2hex(prog_digest, prog->digest, sizeof(prog->digest)); + bin2hex(prog_tag, prog->tag, sizeof(prog->tag)); seq_printf(m, "prog_type:\t%u\n" "prog_jited:\t%u\n" - "prog_digest:\t%s\n" + "prog_tag:\t%s\n" "memlock:\t%llu\n", prog->type, prog->jited, - prog_digest, + prog_tag, prog->pages * 1ULL << PAGE_SHIFT); } #endif diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 83ed2f8f6f228b..cdc43b899f281e 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -2936,7 +2936,7 @@ static int replace_map_fd_with_map_ptr(struct bpf_verifier_env *env) int insn_cnt = env->prog->len; int i, j, err; - err = bpf_prog_calc_digest(env->prog); + err = bpf_prog_calc_tag(env->prog); if (err) return err; diff --git a/kernel/cpu.c b/kernel/cpu.c index f75c4d031eeb21..c47506357519d8 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -1302,10 +1302,24 @@ static int cpuhp_cb_check(enum cpuhp_state state) */ static int cpuhp_reserve_state(enum cpuhp_state state) { - enum cpuhp_state i; + enum cpuhp_state i, end; + struct cpuhp_step *step; - for (i = CPUHP_AP_ONLINE_DYN; i <= CPUHP_AP_ONLINE_DYN_END; i++) { - if (!cpuhp_ap_states[i].name) + switch (state) { + case CPUHP_AP_ONLINE_DYN: + step = cpuhp_ap_states + CPUHP_AP_ONLINE_DYN; + end = CPUHP_AP_ONLINE_DYN_END; + break; + case CPUHP_BP_PREPARE_DYN: + step = cpuhp_bp_states + CPUHP_BP_PREPARE_DYN; + end = CPUHP_BP_PREPARE_DYN_END; + break; + default: + return -EINVAL; + } + + for (i = state; i <= end; i++, step++) { + if (!step->name) return i; } WARN(1, "No more dynamic states available for CPU hotplug\n"); @@ -1323,7 +1337,7 @@ static int cpuhp_store_callbacks(enum cpuhp_state state, const char *name, mutex_lock(&cpuhp_state_mutex); - if (state == CPUHP_AP_ONLINE_DYN) { + if (state == CPUHP_AP_ONLINE_DYN || state == CPUHP_BP_PREPARE_DYN) { ret = cpuhp_reserve_state(state); if (ret < 0) goto out; diff --git a/kernel/module.c b/kernel/module.c index 5088784c0cf9e9..38d4270925d4d1 100644 --- a/kernel/module.c +++ b/kernel/module.c @@ -1145,7 +1145,7 @@ static size_t module_flags_taint(struct module *mod, char *buf) for (i = 0; i < TAINT_FLAGS_COUNT; i++) { if (taint_flags[i].module && test_bit(i, &mod->taints)) - buf[l++] = taint_flags[i].true; + buf[l++] = taint_flags[i].c_true; } return l; diff --git a/kernel/panic.c b/kernel/panic.c index c51edaa04fce38..901c4fb46002e3 100644 --- a/kernel/panic.c +++ b/kernel/panic.c @@ -355,7 +355,7 @@ const char *print_tainted(void) for (i = 0; i < TAINT_FLAGS_COUNT; i++) { const struct taint_flag *t = &taint_flags[i]; *s++ = test_bit(i, &tainted_mask) ? 
- t->true : t->false; + t->c_true : t->c_false; } *s = 0; } else diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h index 80adef7d4c3d01..0d6ff3e471be6c 100644 --- a/kernel/rcu/rcu.h +++ b/kernel/rcu/rcu.h @@ -136,6 +136,7 @@ int rcu_jiffies_till_stall_check(void); #define TPS(x) tracepoint_string(x) void rcu_early_boot_tests(void); +void rcu_test_sync_prims(void); /* * This function really isn't for public consumption, but RCU is special in diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c index 1898559e6b60dd..b23a4d076f3d2c 100644 --- a/kernel/rcu/tiny.c +++ b/kernel/rcu/tiny.c @@ -185,9 +185,6 @@ static __latent_entropy void rcu_process_callbacks(struct softirq_action *unused * benefits of doing might_sleep() to reduce latency.) * * Cool, huh? (Due to Josh Triplett.) - * - * But we want to make this a static inline later. The cond_resched() - * currently makes this problematic. */ void synchronize_sched(void) { @@ -195,7 +192,6 @@ void synchronize_sched(void) lock_is_held(&rcu_lock_map) || lock_is_held(&rcu_sched_lock_map), "Illegal synchronize_sched() in RCU read-side critical section"); - cond_resched(); } EXPORT_SYMBOL_GPL(synchronize_sched); diff --git a/kernel/rcu/tiny_plugin.h b/kernel/rcu/tiny_plugin.h index 196f0302e2f432..c64b827ecbca19 100644 --- a/kernel/rcu/tiny_plugin.h +++ b/kernel/rcu/tiny_plugin.h @@ -60,12 +60,17 @@ EXPORT_SYMBOL_GPL(rcu_scheduler_active); /* * During boot, we forgive RCU lockdep issues. After this function is - * invoked, we start taking RCU lockdep issues seriously. + * invoked, we start taking RCU lockdep issues seriously. Note that unlike + * Tree RCU, Tiny RCU transitions directly from RCU_SCHEDULER_INACTIVE + * to RCU_SCHEDULER_RUNNING, skipping the RCU_SCHEDULER_INIT stage. + * The reason for this is that Tiny RCU does not need kthreads, so does + * not have to care about the fact that the scheduler is half-initialized + * at a certain phase of the boot process. */ void __init rcu_scheduler_starting(void) { WARN_ON(nr_context_switches() > 0); - rcu_scheduler_active = 1; + rcu_scheduler_active = RCU_SCHEDULER_RUNNING; } #endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 96c52e43f7cac0..cb4e2056ccf3cf 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -127,13 +127,16 @@ int rcu_num_nodes __read_mostly = NUM_RCU_NODES; /* Total # rcu_nodes in use. */ int sysctl_panic_on_rcu_stall __read_mostly; /* - * The rcu_scheduler_active variable transitions from zero to one just - * before the first task is spawned. So when this variable is zero, RCU - * can assume that there is but one task, allowing RCU to (for example) + * The rcu_scheduler_active variable is initialized to the value + * RCU_SCHEDULER_INACTIVE and transitions RCU_SCHEDULER_INIT just before the + * first task is spawned. So when this variable is RCU_SCHEDULER_INACTIVE, + * RCU can assume that there is but one task, allowing RCU to (for example) * optimize synchronize_rcu() to a simple barrier(). When this variable - * is one, RCU must actually do all the hard work required to detect real - * grace periods. This variable is also used to suppress boot-time false - * positives from lockdep-RCU error checking. + * is RCU_SCHEDULER_INIT, RCU must actually do all the hard work required + * to detect real grace periods. This variable is also used to suppress + * boot-time false positives from lockdep-RCU error checking. 
Finally, it + * transitions from RCU_SCHEDULER_INIT to RCU_SCHEDULER_RUNNING after RCU + * is fully initialized, including all of its kthreads having been spawned. */ int rcu_scheduler_active __read_mostly; EXPORT_SYMBOL_GPL(rcu_scheduler_active); @@ -3980,18 +3983,22 @@ static int __init rcu_spawn_gp_kthread(void) early_initcall(rcu_spawn_gp_kthread); /* - * This function is invoked towards the end of the scheduler's initialization - * process. Before this is called, the idle task might contain - * RCU read-side critical sections (during which time, this idle - * task is booting the system). After this function is called, the - * idle tasks are prohibited from containing RCU read-side critical - * sections. This function also enables RCU lockdep checking. + * This function is invoked towards the end of the scheduler's + * initialization process. Before this is called, the idle task might + * contain synchronous grace-period primitives (during which time, this idle + * task is booting the system, and such primitives are no-ops). After this + * function is called, any synchronous grace-period primitives are run as + * expedited, with the requesting task driving the grace period forward. + * A later core_initcall() rcu_exp_runtime_mode() will switch to full + * runtime RCU functionality. */ void rcu_scheduler_starting(void) { WARN_ON(num_online_cpus() != 1); WARN_ON(nr_context_switches() > 0); - rcu_scheduler_active = 1; + rcu_test_sync_prims(); + rcu_scheduler_active = RCU_SCHEDULER_INIT; + rcu_test_sync_prims(); } /* diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h index d3053e99fdb67d..e59e1849b89aca 100644 --- a/kernel/rcu/tree_exp.h +++ b/kernel/rcu/tree_exp.h @@ -531,6 +531,20 @@ struct rcu_exp_work { struct work_struct rew_work; }; +/* + * Common code to drive an expedited grace period forward, used by + * workqueues and mid-boot-time tasks. + */ +static void rcu_exp_sel_wait_wake(struct rcu_state *rsp, + smp_call_func_t func, unsigned long s) +{ + /* Initialize the rcu_node tree in preparation for the wait. */ + sync_rcu_exp_select_cpus(rsp, func); + + /* Wait and clean up, including waking everyone. */ + rcu_exp_wait_wake(rsp, s); +} + /* * Work-queue handler to drive an expedited grace period forward. */ @@ -538,12 +552,8 @@ static void wait_rcu_exp_gp(struct work_struct *wp) { struct rcu_exp_work *rewp; - /* Initialize the rcu_node tree in preparation for the wait. */ rewp = container_of(wp, struct rcu_exp_work, rew_work); - sync_rcu_exp_select_cpus(rewp->rew_rsp, rewp->rew_func); - - /* Wait and clean up, including waking everyone. */ - rcu_exp_wait_wake(rewp->rew_rsp, rewp->rew_s); + rcu_exp_sel_wait_wake(rewp->rew_rsp, rewp->rew_func, rewp->rew_s); } /* @@ -569,12 +579,18 @@ static void _synchronize_rcu_expedited(struct rcu_state *rsp, if (exp_funnel_lock(rsp, s)) return; /* Someone else did our work for us. */ - /* Marshall arguments and schedule the expedited grace period. */ - rew.rew_func = func; - rew.rew_rsp = rsp; - rew.rew_s = s; - INIT_WORK_ONSTACK(&rew.rew_work, wait_rcu_exp_gp); - schedule_work(&rew.rew_work); + /* Ensure that load happens before action based on it. */ + if (unlikely(rcu_scheduler_active == RCU_SCHEDULER_INIT)) { + /* Direct call during scheduler init and early_initcalls(). */ + rcu_exp_sel_wait_wake(rsp, func, s); + } else { + /* Marshall arguments & schedule the expedited grace period. 
*/ + rew.rew_func = func; + rew.rew_rsp = rsp; + rew.rew_s = s; + INIT_WORK_ONSTACK(&rew.rew_work, wait_rcu_exp_gp); + schedule_work(&rew.rew_work); + } /* Wait for expedited grace period to complete. */ rdp = per_cpu_ptr(rsp->rda, raw_smp_processor_id()); @@ -676,6 +692,8 @@ void synchronize_rcu_expedited(void) { struct rcu_state *rsp = rcu_state_p; + if (rcu_scheduler_active == RCU_SCHEDULER_INACTIVE) + return; _synchronize_rcu_expedited(rsp, sync_rcu_exp_handler); } EXPORT_SYMBOL_GPL(synchronize_rcu_expedited); @@ -693,3 +711,15 @@ void synchronize_rcu_expedited(void) EXPORT_SYMBOL_GPL(synchronize_rcu_expedited); #endif /* #else #ifdef CONFIG_PREEMPT_RCU */ + +/* + * Switch to run-time mode once Tree RCU has fully initialized. + */ +static int __init rcu_exp_runtime_mode(void) +{ + rcu_test_sync_prims(); + rcu_scheduler_active = RCU_SCHEDULER_RUNNING; + rcu_test_sync_prims(); + return 0; +} +core_initcall(rcu_exp_runtime_mode); diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index 85c5a883c6e310..56583e764ebf39 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -670,7 +670,7 @@ void synchronize_rcu(void) lock_is_held(&rcu_lock_map) || lock_is_held(&rcu_sched_lock_map), "Illegal synchronize_rcu() in RCU read-side critical section"); - if (!rcu_scheduler_active) + if (rcu_scheduler_active == RCU_SCHEDULER_INACTIVE) return; if (rcu_gp_is_expedited()) synchronize_rcu_expedited(); diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c index f19271dce0a978..4f6db7e6a1179e 100644 --- a/kernel/rcu/update.c +++ b/kernel/rcu/update.c @@ -121,11 +121,14 @@ EXPORT_SYMBOL(rcu_read_lock_sched_held); * Should expedited grace-period primitives always fall back to their * non-expedited counterparts? Intended for use within RCU. Note * that if the user specifies both rcu_expedited and rcu_normal, then - * rcu_normal wins. + * rcu_normal wins. (Except during the time period during boot from + * when the first task is spawned until the rcu_exp_runtime_mode() + * core_initcall() is invoked, at which point everything is expedited.) */ bool rcu_gp_is_normal(void) { - return READ_ONCE(rcu_normal); + return READ_ONCE(rcu_normal) && + rcu_scheduler_active != RCU_SCHEDULER_INIT; } EXPORT_SYMBOL_GPL(rcu_gp_is_normal); @@ -135,13 +138,14 @@ static atomic_t rcu_expedited_nesting = /* * Should normal grace-period primitives be expedited? Intended for * use within RCU. Note that this function takes the rcu_expedited - * sysfs/boot variable into account as well as the rcu_expedite_gp() - * nesting. So looping on rcu_unexpedite_gp() until rcu_gp_is_expedited() - * returns false is a -really- bad idea. + * sysfs/boot variable and rcu_scheduler_active into account as well + * as the rcu_expedite_gp() nesting. So looping on rcu_unexpedite_gp() + * until rcu_gp_is_expedited() returns false is a -really- bad idea. 
*/ bool rcu_gp_is_expedited(void) { - return rcu_expedited || atomic_read(&rcu_expedited_nesting); + return rcu_expedited || atomic_read(&rcu_expedited_nesting) || + rcu_scheduler_active == RCU_SCHEDULER_INIT; } EXPORT_SYMBOL_GPL(rcu_gp_is_expedited); @@ -257,7 +261,7 @@ EXPORT_SYMBOL_GPL(rcu_callback_map); int notrace debug_lockdep_rcu_enabled(void) { - return rcu_scheduler_active && debug_locks && + return rcu_scheduler_active != RCU_SCHEDULER_INACTIVE && debug_locks && current->lockdep_recursion == 0; } EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled); @@ -591,7 +595,7 @@ EXPORT_SYMBOL_GPL(call_rcu_tasks); void synchronize_rcu_tasks(void) { /* Complain if the scheduler has not started. */ - RCU_LOCKDEP_WARN(!rcu_scheduler_active, + RCU_LOCKDEP_WARN(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE, "synchronize_rcu_tasks called too soon"); /* Wait for the grace period. */ @@ -813,6 +817,23 @@ static void rcu_spawn_tasks_kthread(void) #endif /* #ifdef CONFIG_TASKS_RCU */ +/* + * Test each non-SRCU synchronous grace-period wait API. This is + * useful just after a change in mode for these primitives, and + * during early boot. + */ +void rcu_test_sync_prims(void) +{ + if (!IS_ENABLED(CONFIG_PROVE_RCU)) + return; + synchronize_rcu(); + synchronize_rcu_bh(); + synchronize_sched(); + synchronize_rcu_expedited(); + synchronize_rcu_bh_expedited(); + synchronize_sched_expedited(); +} + #ifdef CONFIG_PROVE_RCU /* @@ -865,6 +886,7 @@ void rcu_early_boot_tests(void) early_boot_test_call_rcu_bh(); if (rcu_self_test_sched) early_boot_test_call_rcu_sched(); + rcu_test_sync_prims(); } static int rcu_verify_early_boot_tests(void) diff --git a/lib/swiotlb.c b/lib/swiotlb.c index 975b8fc4f1e114..a8d74a733a38b5 100644 --- a/lib/swiotlb.c +++ b/lib/swiotlb.c @@ -483,11 +483,11 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, : 1UL << (BITS_PER_LONG - IO_TLB_SHIFT); /* - * For mappings greater than a page, we limit the stride (and - * hence alignment) to a page size. + * For mappings greater than or equal to a page, we limit the stride + * (and hence alignment) to a page size. 
*/ nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT; - if (size > PAGE_SIZE) + if (size >= PAGE_SIZE) stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT)); else stride = 1; diff --git a/net/ax25/ax25_subr.c b/net/ax25/ax25_subr.c index 4855d18a851107..038b109b2be70e 100644 --- a/net/ax25/ax25_subr.c +++ b/net/ax25/ax25_subr.c @@ -264,7 +264,7 @@ void ax25_disconnect(ax25_cb *ax25, int reason) { ax25_clear_queues(ax25); - if (!sock_flag(ax25->sk, SOCK_DESTROY)) + if (!ax25->sk || !sock_flag(ax25->sk, SOCK_DESTROY)) ax25_stop_heartbeat(ax25); ax25_stop_t1timer(ax25); ax25_stop_t2timer(ax25); diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c index eba1546b5031e6..9a375b908d01fa 100644 --- a/net/ipv4/fib_semantics.c +++ b/net/ipv4/fib_semantics.c @@ -1279,8 +1279,9 @@ int fib_dump_info(struct sk_buff *skb, u32 portid, u32 seq, int event, nla_put_u32(skb, RTA_FLOW, fi->fib_nh[0].nh_tclassid)) goto nla_put_failure; #endif - if (fi->fib_nh->nh_lwtstate) - lwtunnel_fill_encap(skb, fi->fib_nh->nh_lwtstate); + if (fi->fib_nh->nh_lwtstate && + lwtunnel_fill_encap(skb, fi->fib_nh->nh_lwtstate) < 0) + goto nla_put_failure; } #ifdef CONFIG_IP_ROUTE_MULTIPATH if (fi->fib_nhs > 1) { @@ -1316,8 +1317,10 @@ int fib_dump_info(struct sk_buff *skb, u32 portid, u32 seq, int event, nla_put_u32(skb, RTA_FLOW, nh->nh_tclassid)) goto nla_put_failure; #endif - if (nh->nh_lwtstate) - lwtunnel_fill_encap(skb, nh->nh_lwtstate); + if (nh->nh_lwtstate && + lwtunnel_fill_encap(skb, nh->nh_lwtstate) < 0) + goto nla_put_failure; + /* length of rtnetlink header + attributes */ rtnh->rtnh_len = nlmsg_get_pos(skb) - (void *) rtnh; } endfor_nexthops(fi); diff --git a/net/ipv4/route.c b/net/ipv4/route.c index 0fcac8e7a2b2fb..709ffe67d1de16 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -2472,7 +2472,7 @@ static int rt_fill_info(struct net *net, __be32 dst, __be32 src, u32 table_id, r->rtm_dst_len = 32; r->rtm_src_len = 0; r->rtm_tos = fl4->flowi4_tos; - r->rtm_table = table_id; + r->rtm_table = table_id < 256 ? 
table_id : RT_TABLE_COMPAT; if (nla_put_u32(skb, RTA_TABLE, table_id)) goto nla_put_failure; r->rtm_type = rt->rt_type; diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c index 4e777a3243f944..f51919535ca763 100644 --- a/net/ipv4/tcp_fastopen.c +++ b/net/ipv4/tcp_fastopen.c @@ -113,7 +113,7 @@ static bool tcp_fastopen_cookie_gen(struct request_sock *req, struct tcp_fastopen_cookie tmp; if (__tcp_fastopen_cookie_gen(&ip6h->saddr, &tmp)) { - struct in6_addr *buf = (struct in6_addr *) tmp.val; + struct in6_addr *buf = &tmp.addr; int i; for (i = 0; i < 4; i++) diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c index 36d2921809428e..753d6d0860fb14 100644 --- a/net/ipv6/ip6_tunnel.c +++ b/net/ipv6/ip6_tunnel.c @@ -1108,7 +1108,7 @@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield, t->parms.name); goto tx_err_dst_release; } - mtu = dst_mtu(dst) - psh_hlen; + mtu = dst_mtu(dst) - psh_hlen - t->tun_hlen; if (encap_limit >= 0) { max_headroom += 8; mtu -= 8; @@ -1117,7 +1117,7 @@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield, mtu = IPV6_MIN_MTU; if (skb_dst(skb) && !t->parms.collect_md) skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu); - if (skb->len > mtu && !skb_is_gso(skb)) { + if (skb->len - t->tun_hlen > mtu && !skb_is_gso(skb)) { *pmtu = mtu; err = -EMSGSIZE; goto tx_err_dst_release; diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index 14a3903f1c82d8..7139fffd61b6f7 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -81,7 +81,7 @@ static void mld_gq_timer_expire(unsigned long data); static void mld_ifc_timer_expire(unsigned long data); static void mld_ifc_event(struct inet6_dev *idev); static void mld_add_delrec(struct inet6_dev *idev, struct ifmcaddr6 *pmc); -static void mld_del_delrec(struct inet6_dev *idev, const struct in6_addr *addr); +static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *pmc); static void mld_clear_delrec(struct inet6_dev *idev); static bool mld_in_v1_mode(const struct inet6_dev *idev); static int sf_setstate(struct ifmcaddr6 *pmc); @@ -692,9 +692,9 @@ static void igmp6_group_dropped(struct ifmcaddr6 *mc) dev_mc_del(dev, buf); } - if (mc->mca_flags & MAF_NOREPORT) - goto done; spin_unlock_bh(&mc->mca_lock); + if (mc->mca_flags & MAF_NOREPORT) + return; if (!mc->idev->dead) igmp6_leave_group(mc); @@ -702,8 +702,6 @@ static void igmp6_group_dropped(struct ifmcaddr6 *mc) spin_lock_bh(&mc->mca_lock); if (del_timer(&mc->mca_timer)) atomic_dec(&mc->mca_refcnt); -done: - ip6_mc_clear_src(mc); spin_unlock_bh(&mc->mca_lock); } @@ -748,10 +746,11 @@ static void mld_add_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im) spin_unlock_bh(&idev->mc_lock); } -static void mld_del_delrec(struct inet6_dev *idev, const struct in6_addr *pmca) +static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im) { struct ifmcaddr6 *pmc, *pmc_prev; - struct ip6_sf_list *psf, *psf_next; + struct ip6_sf_list *psf; + struct in6_addr *pmca = &im->mca_addr; spin_lock_bh(&idev->mc_lock); pmc_prev = NULL; @@ -768,14 +767,20 @@ static void mld_del_delrec(struct inet6_dev *idev, const struct in6_addr *pmca) } spin_unlock_bh(&idev->mc_lock); + spin_lock_bh(&im->mca_lock); if (pmc) { - for (psf = pmc->mca_tomb; psf; psf = psf_next) { - psf_next = psf->sf_next; - kfree(psf); + im->idev = pmc->idev; + im->mca_crcount = idev->mc_qrv; + im->mca_sfmode = pmc->mca_sfmode; + if (pmc->mca_sfmode == MCAST_INCLUDE) { + im->mca_tomb = pmc->mca_tomb; + im->mca_sources = pmc->mca_sources; + for (psf = 
im->mca_sources; psf; psf = psf->sf_next) + psf->sf_crcount = im->mca_crcount; } in6_dev_put(pmc->idev); - kfree(pmc); } + spin_unlock_bh(&im->mca_lock); } static void mld_clear_delrec(struct inet6_dev *idev) @@ -904,7 +909,7 @@ int ipv6_dev_mc_inc(struct net_device *dev, const struct in6_addr *addr) mca_get(mc); write_unlock_bh(&idev->lock); - mld_del_delrec(idev, &mc->mca_addr); + mld_del_delrec(idev, mc); igmp6_group_added(mc); ma_put(mc); return 0; @@ -927,6 +932,7 @@ int __ipv6_dev_mc_dec(struct inet6_dev *idev, const struct in6_addr *addr) write_unlock_bh(&idev->lock); igmp6_group_dropped(ma); + ip6_mc_clear_src(ma); ma_put(ma); return 0; @@ -2501,15 +2507,17 @@ void ipv6_mc_down(struct inet6_dev *idev) /* Withdraw multicast list */ read_lock_bh(&idev->lock); - mld_ifc_stop_timer(idev); - mld_gq_stop_timer(idev); - mld_dad_stop_timer(idev); for (i = idev->mc_list; i; i = i->next) igmp6_group_dropped(i); - read_unlock_bh(&idev->lock); - mld_clear_delrec(idev); + /* Should stop timer after group drop. or we will + * start timer again in mld_ifc_event() + */ + mld_ifc_stop_timer(idev); + mld_gq_stop_timer(idev); + mld_dad_stop_timer(idev); + read_unlock_bh(&idev->lock); } static void ipv6_mc_reset(struct inet6_dev *idev) @@ -2531,8 +2539,10 @@ void ipv6_mc_up(struct inet6_dev *idev) read_lock_bh(&idev->lock); ipv6_mc_reset(idev); - for (i = idev->mc_list; i; i = i->next) + for (i = idev->mc_list; i; i = i->next) { + mld_del_delrec(idev, i); igmp6_group_added(i); + } read_unlock_bh(&idev->lock); } @@ -2565,6 +2575,7 @@ void ipv6_mc_destroy_dev(struct inet6_dev *idev) /* Deactivate timers */ ipv6_mc_down(idev); + mld_clear_delrec(idev); /* Delete all-nodes address. */ /* We cannot call ipv6_dev_mc_dec() directly, our caller in @@ -2579,11 +2590,9 @@ void ipv6_mc_destroy_dev(struct inet6_dev *idev) write_lock_bh(&idev->lock); while ((i = idev->mc_list) != NULL) { idev->mc_list = i->next; - write_unlock_bh(&idev->lock); - igmp6_group_dropped(i); + write_unlock_bh(&idev->lock); ma_put(i); - write_lock_bh(&idev->lock); } write_unlock_bh(&idev->lock); diff --git a/net/ipv6/route.c b/net/ipv6/route.c index ce5aaf448c5410..4f6b067c8753a5 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -3317,7 +3317,8 @@ static int rt6_fill_node(struct net *net, if (nla_put_u8(skb, RTA_PREF, IPV6_EXTRACT_PREF(rt->rt6i_flags))) goto nla_put_failure; - lwtunnel_fill_encap(skb, rt->dst.lwtstate); + if (lwtunnel_fill_encap(skb, rt->dst.lwtstate) < 0) + goto nla_put_failure; nlmsg_end(skb, nlh); return 0; diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c index ef1c8a46e7acee..03a06480362689 100644 --- a/net/ipv6/seg6_hmac.c +++ b/net/ipv6/seg6_hmac.c @@ -400,7 +400,7 @@ static int seg6_hmac_init_algo(void) *p_tfm = tfm; } - p_tfm = this_cpu_ptr(algo->tfms); + p_tfm = raw_cpu_ptr(algo->tfms); tfm = *p_tfm; shsize = sizeof(*shash) + crypto_shash_descsize(tfm); diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c index bbfca22c34aeec..1d60cb132835c9 100644 --- a/net/ipv6/seg6_iptunnel.c +++ b/net/ipv6/seg6_iptunnel.c @@ -265,7 +265,9 @@ int seg6_output(struct net *net, struct sock *sk, struct sk_buff *skb) slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate); #ifdef CONFIG_DST_CACHE + preempt_disable(); dst = dst_cache_get(&slwt->cache); + preempt_enable(); #endif if (unlikely(!dst)) { @@ -286,7 +288,9 @@ int seg6_output(struct net *net, struct sock *sk, struct sk_buff *skb) } #ifdef CONFIG_DST_CACHE + preempt_disable(); dst_cache_set_ip6(&slwt->cache, dst, &fl6.saddr); + preempt_enable(); #endif } 
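The seg6_iptunnel.c hunks above bracket dst_cache_get() and dst_cache_set_ip6() with preempt_disable()/preempt_enable(): dst_cache keeps its state in per-CPU storage, and this lwtunnel output path can run in process context where the task may be preempted and migrated to another CPU between looking up the per-CPU slot and using it. A minimal sketch of that general per-CPU guard pattern follows, using only the generic helpers; the demo_hits counter and demo_record_hit() function are hypothetical illustrations, not part of the patch:

/* Illustrative only: the same preempt_disable() guard the patch adds
 * around the per-CPU dst_cache accesses, shown on a toy per-CPU counter.
 */
#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(unsigned long, demo_hits);	/* hypothetical counter */

static void demo_record_hit(void)
{
	unsigned long *hits;

	preempt_disable();			/* pin the task to this CPU...        */
	hits = this_cpu_ptr(&demo_hits);	/* ...so the pointer stays that CPU's */
	(*hits)++;
	preempt_enable();
}

Without the guard, the task could migrate after obtaining the per-CPU pointer and then update another CPU's data, which is the kind of race the preempt_disable()/preempt_enable() pairs close for the dst_cache calls in the hunk above.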
diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c index e75cbf6ecc26e4..a0d901d8992ea8 100644 --- a/net/mac80211/chan.c +++ b/net/mac80211/chan.c @@ -231,9 +231,6 @@ ieee80211_get_max_required_bw(struct ieee80211_sub_if_data *sdata) !(sta->sdata->bss && sta->sdata->bss == sdata->bss)) continue; - if (!sta->uploaded || !test_sta_flag(sta, WLAN_STA_ASSOC)) - continue; - max_bw = max(max_bw, ieee80211_get_sta_bw(&sta->sta)); } rcu_read_unlock(); diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c index 41497b670e2bde..d37ae7dc114b2c 100644 --- a/net/mac80211/iface.c +++ b/net/mac80211/iface.c @@ -6,6 +6,7 @@ * Copyright (c) 2006 Jiri Benc * Copyright 2008, Johannes Berg * Copyright 2013-2014 Intel Mobile Communications GmbH + * Copyright (c) 2016 Intel Deutschland GmbH * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -1295,6 +1296,26 @@ static void ieee80211_iface_work(struct work_struct *work) } else if (ieee80211_is_action(mgmt->frame_control) && mgmt->u.action.category == WLAN_CATEGORY_VHT) { switch (mgmt->u.action.u.vht_group_notif.action_code) { + case WLAN_VHT_ACTION_OPMODE_NOTIF: { + struct ieee80211_rx_status *status; + enum nl80211_band band; + u8 opmode; + + status = IEEE80211_SKB_RXCB(skb); + band = status->band; + opmode = mgmt->u.action.u.vht_opmode_notif.operating_mode; + + mutex_lock(&local->sta_mtx); + sta = sta_info_get_bss(sdata, mgmt->sa); + + if (sta) + ieee80211_vht_handle_opmode(sdata, sta, + opmode, + band); + + mutex_unlock(&local->sta_mtx); + break; + } case WLAN_VHT_ACTION_GROUPID_MGMT: ieee80211_process_mu_groups(sdata, mgmt); break; diff --git a/net/mac80211/main.c b/net/mac80211/main.c index 1822c77f2b1c31..56fb47953b7242 100644 --- a/net/mac80211/main.c +++ b/net/mac80211/main.c @@ -913,12 +913,17 @@ int ieee80211_register_hw(struct ieee80211_hw *hw) supp_ht = supp_ht || sband->ht_cap.ht_supported; supp_vht = supp_vht || sband->vht_cap.vht_supported; - if (sband->ht_cap.ht_supported) - local->rx_chains = - max(ieee80211_mcs_to_chains(&sband->ht_cap.mcs), - local->rx_chains); + if (!sband->ht_cap.ht_supported) + continue; /* TODO: consider VHT for RX chains, hopefully it's the same */ + local->rx_chains = + max(ieee80211_mcs_to_chains(&sband->ht_cap.mcs), + local->rx_chains); + + /* no need to mask, SM_PS_DISABLED has all bits set */ + sband->ht_cap.cap |= WLAN_HT_CAP_SM_PS_DISABLED << + IEEE80211_HT_CAP_SM_PS_SHIFT; } /* if low-level driver supports AP, we also support VLAN */ diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c index 206698bc93f406..9e2641d4558753 100644 --- a/net/mac80211/rate.c +++ b/net/mac80211/rate.c @@ -40,6 +40,8 @@ void rate_control_rate_init(struct sta_info *sta) ieee80211_sta_set_rx_nss(sta); + ieee80211_recalc_min_chandef(sta->sdata); + if (!ref) return; diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c index 3e289a64ed4317..3090dd4342f6ee 100644 --- a/net/mac80211/rx.c +++ b/net/mac80211/rx.c @@ -2472,7 +2472,8 @@ ieee80211_rx_h_mesh_fwding(struct ieee80211_rx_data *rx) if (!ifmsh->mshcfg.dot11MeshForwarding) goto out; - fwd_skb = skb_copy_expand(skb, local->tx_headroom, 0, GFP_ATOMIC); + fwd_skb = skb_copy_expand(skb, local->tx_headroom + + sdata->encrypt_headroom, 0, GFP_ATOMIC); if (!fwd_skb) { net_info_ratelimited("%s: failed to clone mesh frame\n", sdata->name); @@ -2880,17 +2881,10 @@ ieee80211_rx_h_action(struct ieee80211_rx_data *rx) switch (mgmt->u.action.u.vht_opmode_notif.action_code) { case 
WLAN_VHT_ACTION_OPMODE_NOTIF: { - u8 opmode; - /* verify opmode is present */ if (len < IEEE80211_MIN_ACTION_SIZE + 2) goto invalid; - - opmode = mgmt->u.action.u.vht_opmode_notif.operating_mode; - - ieee80211_vht_handle_opmode(rx->sdata, rx->sta, - opmode, status->band); - goto handled; + goto queue; } case WLAN_VHT_ACTION_GROUPID_MGMT: { if (len < IEEE80211_MIN_ACTION_SIZE + 25) @@ -3942,21 +3936,31 @@ static bool ieee80211_invoke_fast_rx(struct ieee80211_rx_data *rx, u64_stats_update_end(&stats->syncp); if (fast_rx->internal_forward) { - struct sta_info *dsta = sta_info_get(rx->sdata, skb->data); + struct sk_buff *xmit_skb = NULL; + bool multicast = is_multicast_ether_addr(skb->data); + + if (multicast) { + xmit_skb = skb_copy(skb, GFP_ATOMIC); + } else if (sta_info_get(rx->sdata, skb->data)) { + xmit_skb = skb; + skb = NULL; + } - if (dsta) { + if (xmit_skb) { /* * Send to wireless media and increase priority by 256 * to keep the received priority instead of * reclassifying the frame (see cfg80211_classify8021d). */ - skb->priority += 256; - skb->protocol = htons(ETH_P_802_3); - skb_reset_network_header(skb); - skb_reset_mac_header(skb); - dev_queue_xmit(skb); - return true; + xmit_skb->priority += 256; + xmit_skb->protocol = htons(ETH_P_802_3); + skb_reset_network_header(xmit_skb); + skb_reset_mac_header(xmit_skb); + dev_queue_xmit(xmit_skb); } + + if (!skb) + return true; } /* deliver to local stack */ diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c index b6cfcf038c11fa..50c309094c37ba 100644 --- a/net/mac80211/sta_info.c +++ b/net/mac80211/sta_info.c @@ -1501,8 +1501,8 @@ ieee80211_sta_ps_deliver_response(struct sta_info *sta, /* This will evaluate to 1, 3, 5 or 7. */ for (ac = IEEE80211_AC_VO; ac < IEEE80211_NUM_ACS; ac++) - if (ignored_acs & BIT(ac)) - continue; + if (!(ignored_acs & ieee80211_ac_to_qos_mask[ac])) + break; tid = 7 - 2 * ac; ieee80211_send_null_response(sta, tid, reason, true, false); diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c index 0d8b716e509edc..797e847cbc49a1 100644 --- a/net/mac80211/tx.c +++ b/net/mac80211/tx.c @@ -1243,7 +1243,7 @@ ieee80211_tx_prepare(struct ieee80211_sub_if_data *sdata, static struct txq_info *ieee80211_get_txq(struct ieee80211_local *local, struct ieee80211_vif *vif, - struct ieee80211_sta *pubsta, + struct sta_info *sta, struct sk_buff *skb) { struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data; @@ -1257,10 +1257,13 @@ static struct txq_info *ieee80211_get_txq(struct ieee80211_local *local, if (!ieee80211_is_data(hdr->frame_control)) return NULL; - if (pubsta) { + if (sta) { u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK; - txq = pubsta->txq[tid]; + if (!sta->uploaded) + return NULL; + + txq = sta->sta.txq[tid]; } else if (vif) { txq = vif->txq; } @@ -1503,23 +1506,17 @@ static bool ieee80211_queue_skb(struct ieee80211_local *local, struct fq *fq = &local->fq; struct ieee80211_vif *vif; struct txq_info *txqi; - struct ieee80211_sta *pubsta; if (!local->ops->wake_tx_queue || sdata->vif.type == NL80211_IFTYPE_MONITOR) return false; - if (sta && sta->uploaded) - pubsta = &sta->sta; - else - pubsta = NULL; - if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) sdata = container_of(sdata->bss, struct ieee80211_sub_if_data, u.ap); vif = &sdata->vif; - txqi = ieee80211_get_txq(local, vif, pubsta, skb); + txqi = ieee80211_get_txq(local, vif, sta, skb); if (!txqi) return false; diff --git a/net/mac80211/vht.c b/net/mac80211/vht.c index 6832bf6ab69fe0..43e45bb660bcde 100644 --- a/net/mac80211/vht.c +++ 
b/net/mac80211/vht.c @@ -527,8 +527,10 @@ void ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata, u32 changed = __ieee80211_vht_handle_opmode(sdata, sta, opmode, band); - if (changed > 0) + if (changed > 0) { + ieee80211_recalc_min_chandef(sdata); rate_control_rate_update(local, sband, sta, changed); + } } void ieee80211_get_vht_mask_from_cap(__le16 vht_cap, diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c index 6b78bab27755b2..54253ea5976e69 100644 --- a/net/openvswitch/conntrack.c +++ b/net/openvswitch/conntrack.c @@ -514,7 +514,7 @@ static int ovs_ct_nat_execute(struct sk_buff *skb, struct nf_conn *ct, int hooknum, nh_off, err = NF_ACCEPT; nh_off = skb_network_offset(skb); - skb_pull(skb, nh_off); + skb_pull_rcsum(skb, nh_off); /* See HOOK2MANIP(). */ if (maniptype == NF_NAT_MANIP_SRC) @@ -579,6 +579,7 @@ static int ovs_ct_nat_execute(struct sk_buff *skb, struct nf_conn *ct, err = nf_nat_packet(ct, ctinfo, hooknum, skb); push: skb_push(skb, nh_off); + skb_postpush_rcsum(skb, skb->data, nh_off); return err; } @@ -886,7 +887,7 @@ int ovs_ct_execute(struct net *net, struct sk_buff *skb, /* The conntrack module expects to be working at L3. */ nh_ofs = skb_network_offset(skb); - skb_pull(skb, nh_ofs); + skb_pull_rcsum(skb, nh_ofs); if (key->ip.frag != OVS_FRAG_TYPE_NONE) { err = handle_fragments(net, key, info->zone.id, skb); @@ -900,6 +901,7 @@ int ovs_ct_execute(struct net *net, struct sk_buff *skb, err = ovs_ct_lookup(net, key, info, skb); skb_push(skb, nh_ofs); + skb_postpush_rcsum(skb, skb->data, nh_ofs); if (err) kfree_skb(skb); return err; diff --git a/net/sched/act_api.c b/net/sched/act_api.c index 2095c83ce7730d..e10456ef6f7a43 100644 --- a/net/sched/act_api.c +++ b/net/sched/act_api.c @@ -900,8 +900,6 @@ tca_action_gd(struct net *net, struct nlattr *nla, struct nlmsghdr *n, goto err; } act->order = i; - if (event == RTM_GETACTION) - act->tcfa_refcnt++; list_add_tail(&act->list, &actions); } @@ -914,7 +912,8 @@ tca_action_gd(struct net *net, struct nlattr *nla, struct nlmsghdr *n, return ret; } err: - tcf_action_destroy(&actions, 0); + if (event != RTM_GETACTION) + tcf_action_destroy(&actions, 0); return ret; } diff --git a/net/sched/act_bpf.c b/net/sched/act_bpf.c index 1c60317f01214a..520baa41cba3a8 100644 --- a/net/sched/act_bpf.c +++ b/net/sched/act_bpf.c @@ -123,12 +123,11 @@ static int tcf_bpf_dump_ebpf_info(const struct tcf_bpf *prog, nla_put_string(skb, TCA_ACT_BPF_NAME, prog->bpf_name)) return -EMSGSIZE; - nla = nla_reserve(skb, TCA_ACT_BPF_DIGEST, - sizeof(prog->filter->digest)); + nla = nla_reserve(skb, TCA_ACT_BPF_TAG, sizeof(prog->filter->tag)); if (nla == NULL) return -EMSGSIZE; - memcpy(nla_data(nla), prog->filter->digest, nla_len(nla)); + memcpy(nla_data(nla), prog->filter->tag, nla_len(nla)); return 0; } diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c index adc776048d1a89..d9c97018317dd7 100644 --- a/net/sched/cls_bpf.c +++ b/net/sched/cls_bpf.c @@ -555,11 +555,11 @@ static int cls_bpf_dump_ebpf_info(const struct cls_bpf_prog *prog, nla_put_string(skb, TCA_BPF_NAME, prog->bpf_name)) return -EMSGSIZE; - nla = nla_reserve(skb, TCA_BPF_DIGEST, sizeof(prog->filter->digest)); + nla = nla_reserve(skb, TCA_BPF_TAG, sizeof(prog->filter->tag)); if (nla == NULL) return -EMSGSIZE; - memcpy(nla_data(nla), prog->filter->digest, nla_len(nla)); + memcpy(nla_data(nla), prog->filter->tag, nla_len(nla)); return 0; } diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c index 886e9d381771ab..1530825985221a 100644 
--- a/net/sunrpc/auth_gss/svcauth_gss.c
+++ b/net/sunrpc/auth_gss/svcauth_gss.c
@@ -1489,7 +1489,7 @@ svcauth_gss_accept(struct svc_rqst *rqstp, __be32 *authp)
 	case RPC_GSS_PROC_DESTROY:
 		if (gss_write_verf(rqstp, rsci->mechctx, gc->gc_seq))
 			goto auth_err;
-		rsci->h.expiry_time = get_seconds();
+		rsci->h.expiry_time = seconds_since_boot();
 		set_bit(CACHE_NEGATIVE, &rsci->h.flags);
 		if (resv->iov_len + 4 > PAGE_SIZE)
 			goto drop;
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 3bc1d61694cbbb..9c9db55a0c1e17 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -799,6 +799,8 @@ static int svc_handle_xprt(struct svc_rqst *rqstp, struct svc_xprt *xprt)
 
 	if (test_bit(XPT_CLOSE, &xprt->xpt_flags)) {
 		dprintk("svc_recv: found XPT_CLOSE\n");
+		if (test_and_clear_bit(XPT_KILL_TEMP, &xprt->xpt_flags))
+			xprt->xpt_ops->xpo_kill_temp_xprt(xprt);
 		svc_delete_xprt(xprt);
 		/* Leave XPT_BUSY set on the dead xprt: */
 		goto out;
@@ -1020,9 +1022,11 @@ void svc_age_temp_xprts_now(struct svc_serv *serv, struct sockaddr *server_addr)
 		le = to_be_closed.next;
 		list_del_init(le);
 		xprt = list_entry(le, struct svc_xprt, xpt_list);
-		dprintk("svc_age_temp_xprts_now: closing %p\n", xprt);
-		xprt->xpt_ops->xpo_kill_temp_xprt(xprt);
-		svc_close_xprt(xprt);
+		set_bit(XPT_CLOSE, &xprt->xpt_flags);
+		set_bit(XPT_KILL_TEMP, &xprt->xpt_flags);
+		dprintk("svc_age_temp_xprts_now: queuing xprt %p for closing\n",
+			xprt);
+		svc_xprt_enqueue(xprt);
 	}
 }
 EXPORT_SYMBOL_GPL(svc_age_temp_xprts_now);
diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index 57d35fbb1c2857..172b537f8cfc94 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -347,8 +347,6 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
 	atomic_inc(&rdma_stat_read);
 	return ret;
 err:
-	ib_dma_unmap_sg(xprt->sc_cm_id->device,
-			frmr->sg, frmr->sg_nents, frmr->direction);
 	svc_rdma_put_context(ctxt, 0);
 	svc_rdma_put_frmr(xprt, frmr);
 	return ret;
diff --git a/net/tipc/discover.c b/net/tipc/discover.c
index 6b109a808d4c5f..02462d67d1914b 100644
--- a/net/tipc/discover.c
+++ b/net/tipc/discover.c
@@ -169,7 +169,7 @@ void tipc_disc_rcv(struct net *net, struct sk_buff *skb,
 
 	/* Send response, if necessary */
 	if (respond && (mtyp == DSC_REQ_MSG)) {
-		rskb = tipc_buf_acquire(MAX_H_SIZE);
+		rskb = tipc_buf_acquire(MAX_H_SIZE, GFP_ATOMIC);
 		if (!rskb)
 			return;
 		tipc_disc_init_msg(net, rskb, DSC_RESP_MSG, bearer);
@@ -278,7 +278,7 @@ int tipc_disc_create(struct net *net, struct tipc_bearer *b,
 	req = kmalloc(sizeof(*req), GFP_ATOMIC);
 	if (!req)
 		return -ENOMEM;
-	req->buf = tipc_buf_acquire(MAX_H_SIZE);
+	req->buf = tipc_buf_acquire(MAX_H_SIZE, GFP_ATOMIC);
 	if (!req->buf) {
 		kfree(req);
 		return -ENOMEM;
diff --git a/net/tipc/link.c b/net/tipc/link.c
index bda89bf9f4ff18..4e8647aef01c1d 100644
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -1395,7 +1395,7 @@ void tipc_link_tnl_prepare(struct tipc_link *l, struct tipc_link *tnl,
 		msg_set_seqno(hdr, seqno++);
 		pktlen = msg_size(hdr);
 		msg_set_size(&tnlhdr, pktlen + INT_H_SIZE);
-		tnlskb = tipc_buf_acquire(pktlen + INT_H_SIZE);
+		tnlskb = tipc_buf_acquire(pktlen + INT_H_SIZE, GFP_ATOMIC);
 		if (!tnlskb) {
 			pr_warn("%sunable to send packet\n", link_co_err);
 			return;
diff --git a/net/tipc/msg.c b/net/tipc/msg.c
index a22be502f1bd06..ab02d07424764a 100644
--- a/net/tipc/msg.c
+++ b/net/tipc/msg.c
@@ -58,12 +58,12 @@ static unsigned int align(unsigned int i)
  * NOTE: Headroom is reserved to allow prepending of a data link header.
  *       There may also be unrequested tailroom present at the buffer's end.
  */
-struct sk_buff *tipc_buf_acquire(u32 size)
+struct sk_buff *tipc_buf_acquire(u32 size, gfp_t gfp)
 {
 	struct sk_buff *skb;
 	unsigned int buf_size = (BUF_HEADROOM + size + 3) & ~3u;
 
-	skb = alloc_skb_fclone(buf_size, GFP_ATOMIC);
+	skb = alloc_skb_fclone(buf_size, gfp);
 	if (skb) {
 		skb_reserve(skb, BUF_HEADROOM);
 		skb_put(skb, size);
@@ -95,7 +95,7 @@ struct sk_buff *tipc_msg_create(uint user, uint type,
 	struct tipc_msg *msg;
 	struct sk_buff *buf;
 
-	buf = tipc_buf_acquire(hdr_sz + data_sz);
+	buf = tipc_buf_acquire(hdr_sz + data_sz, GFP_ATOMIC);
 	if (unlikely(!buf))
 		return NULL;
 
@@ -261,7 +261,7 @@ int tipc_msg_build(struct tipc_msg *mhdr, struct msghdr *m,
 
 	/* No fragmentation needed? */
 	if (likely(msz <= pktmax)) {
-		skb = tipc_buf_acquire(msz);
+		skb = tipc_buf_acquire(msz, GFP_KERNEL);
 		if (unlikely(!skb))
 			return -ENOMEM;
 		skb_orphan(skb);
@@ -282,7 +282,7 @@ int tipc_msg_build(struct tipc_msg *mhdr, struct msghdr *m,
 	msg_set_importance(&pkthdr, msg_importance(mhdr));
 
 	/* Prepare first fragment */
-	skb = tipc_buf_acquire(pktmax);
+	skb = tipc_buf_acquire(pktmax, GFP_KERNEL);
 	if (!skb)
 		return -ENOMEM;
 	skb_orphan(skb);
@@ -313,7 +313,7 @@ int tipc_msg_build(struct tipc_msg *mhdr, struct msghdr *m,
 			pktsz = drem + INT_H_SIZE;
 		else
 			pktsz = pktmax;
-		skb = tipc_buf_acquire(pktsz);
+		skb = tipc_buf_acquire(pktsz, GFP_KERNEL);
 		if (!skb) {
 			rc = -ENOMEM;
 			goto error;
@@ -448,7 +448,7 @@ bool tipc_msg_make_bundle(struct sk_buff **skb, struct tipc_msg *msg,
 	if (msz > (max / 2))
 		return false;
 
-	_skb = tipc_buf_acquire(max);
+	_skb = tipc_buf_acquire(max, GFP_ATOMIC);
 	if (!_skb)
 		return false;
 
@@ -496,7 +496,7 @@ bool tipc_msg_reverse(u32 own_node, struct sk_buff **skb, int err)
 
 	/* Never return SHORT header; expand by replacing buffer if necessary */
 	if (msg_short(hdr)) {
-		*skb = tipc_buf_acquire(BASIC_H_SIZE + dlen);
+		*skb = tipc_buf_acquire(BASIC_H_SIZE + dlen, GFP_ATOMIC);
 		if (!*skb)
 			goto exit;
 		memcpy((*skb)->data + BASIC_H_SIZE, msg_data(hdr), dlen);
diff --git a/net/tipc/msg.h b/net/tipc/msg.h
index 8d408612ffa490..2c3dc38abf9c25 100644
--- a/net/tipc/msg.h
+++ b/net/tipc/msg.h
@@ -820,7 +820,7 @@ static inline bool msg_is_reset(struct tipc_msg *hdr)
 	return (msg_user(hdr) == LINK_PROTOCOL) && (msg_type(hdr) == RESET_MSG);
 }
 
-struct sk_buff *tipc_buf_acquire(u32 size);
+struct sk_buff *tipc_buf_acquire(u32 size, gfp_t gfp);
 bool tipc_msg_validate(struct sk_buff *skb);
 bool tipc_msg_reverse(u32 own_addr, struct sk_buff **skb, int err);
 void tipc_msg_init(u32 own_addr, struct tipc_msg *m, u32 user, u32 type,
diff --git a/net/tipc/name_distr.c b/net/tipc/name_distr.c
index c1cfd92de17aee..23f8899e0f8c3d 100644
--- a/net/tipc/name_distr.c
+++ b/net/tipc/name_distr.c
@@ -69,7 +69,7 @@ static struct sk_buff *named_prepare_buf(struct net *net, u32 type, u32 size,
 					 u32 dest)
 {
 	struct tipc_net *tn = net_generic(net, tipc_net_id);
-	struct sk_buff *buf = tipc_buf_acquire(INT_H_SIZE + size);
+	struct sk_buff *buf = tipc_buf_acquire(INT_H_SIZE + size, GFP_ATOMIC);
 	struct tipc_msg *msg;
 
 	if (buf != NULL) {
diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index ef5eff93a8b817..5c1b267e22beef 100644
--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -4615,6 +4615,15 @@ int cfg80211_check_station_change(struct wiphy *wiphy,
 		break;
 	}
 
+	/*
+	 * Older kernel versions ignored this attribute entirely, so don't
+	 * reject attempts to update it but mark it as unused instead so the
+	 * driver won't look at the data.
+	 */
+	if (statype != CFG80211_STA_AP_CLIENT_UNASSOC &&
+	    statype != CFG80211_STA_TDLS_PEER_SETUP)
+		params->opmode_notif_used = false;
+
 	return 0;
 }
 EXPORT_SYMBOL(cfg80211_check_station_change);
@@ -4854,6 +4863,12 @@ static int nl80211_set_station(struct sk_buff *skb, struct genl_info *info)
 		params.local_pm = pm;
 	}
 
+	if (info->attrs[NL80211_ATTR_OPMODE_NOTIF]) {
+		params.opmode_notif_used = true;
+		params.opmode_notif =
+			nla_get_u8(info->attrs[NL80211_ATTR_OPMODE_NOTIF]);
+	}
+
 	/* Include parameters for TDLS peer (will check later) */
 	err = nl80211_set_station_tdls(info, &params);
 	if (err)
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index 4a57c8a60bd919..6a6f44dd594bc4 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -610,6 +610,33 @@ static int find_perf_probe_point_from_dwarf(struct probe_trace_point *tp,
 	return ret ? : -ENOENT;
 }
 
+/* Adjust symbol name and address */
+static int post_process_probe_trace_point(struct probe_trace_point *tp,
+					   struct map *map, unsigned long offs)
+{
+	struct symbol *sym;
+	u64 addr = tp->address + tp->offset - offs;
+
+	sym = map__find_symbol(map, addr);
+	if (!sym)
+		return -ENOENT;
+
+	if (strcmp(sym->name, tp->symbol)) {
+		/* If we have no realname, use symbol for it */
+		if (!tp->realname)
+			tp->realname = tp->symbol;
+		else
+			free(tp->symbol);
+		tp->symbol = strdup(sym->name);
+		if (!tp->symbol)
+			return -ENOMEM;
+	}
+	tp->offset = addr - sym->start;
+	tp->address -= offs;
+
+	return 0;
+}
+
 /*
  * Rename DWARF symbols to ELF symbols -- gcc sometimes optimizes functions
  * and generate new symbols with suffixes such as .constprop.N or .isra.N
@@ -622,11 +649,9 @@ static int
 post_process_offline_probe_trace_events(struct probe_trace_event *tevs,
 					int ntevs, const char *pathname)
 {
-	struct symbol *sym;
 	struct map *map;
 	unsigned long stext = 0;
-	u64 addr;
-	int i;
+	int i, ret = 0;
 
 	/* Prepare a map for offline binary */
 	map = dso__new_map(pathname);
@@ -636,23 +661,14 @@ post_process_offline_probe_trace_events(struct probe_trace_event *tevs,
 	}
 
 	for (i = 0; i < ntevs; i++) {
-		addr = tevs[i].point.address + tevs[i].point.offset - stext;
-		sym = map__find_symbol(map, addr);
-		if (!sym)
-			continue;
-		if (!strcmp(sym->name, tevs[i].point.symbol))
-			continue;
-		/* If we have no realname, use symbol for it */
-		if (!tevs[i].point.realname)
-			tevs[i].point.realname = tevs[i].point.symbol;
-		else
-			free(tevs[i].point.symbol);
-		tevs[i].point.symbol = strdup(sym->name);
-		tevs[i].point.offset = addr - sym->start;
+		ret = post_process_probe_trace_point(&tevs[i].point,
+						     map, stext);
+		if (ret < 0)
+			break;
 	}
 	map__put(map);
 
-	return 0;
+	return ret;
 }
 
 static int add_exec_to_probe_trace_events(struct probe_trace_event *tevs,
@@ -682,18 +698,31 @@ static int add_exec_to_probe_trace_events(struct probe_trace_event *tevs,
 	return ret;
 }
 
-static int add_module_to_probe_trace_events(struct probe_trace_event *tevs,
-					    int ntevs, const char *module)
+static int
+post_process_module_probe_trace_events(struct probe_trace_event *tevs,
+				       int ntevs, const char *module,
+				       struct debuginfo *dinfo)
 {
+	Dwarf_Addr text_offs = 0;
 	int i, ret = 0;
 	char *mod_name = NULL;
+	struct map *map;
 
 	if (!module)
 		return 0;
 
-	mod_name = find_module_name(module);
+	map = get_target_map(module, false);
+	if (!map || debuginfo__get_text_offset(dinfo, &text_offs, true) < 0) {
+		pr_warning("Failed to get ELF symbols for %s\n", module);
+		return -EINVAL;
+	}
 
+	mod_name = find_module_name(module);
 	for (i = 0; i < ntevs; i++) {
+		ret = post_process_probe_trace_point(&tevs[i].point,
+						map, (unsigned long)text_offs);
+		if (ret < 0)
+			break;
 		tevs[i].point.module =
 			strdup(mod_name ? mod_name : module);
 		if (!tevs[i].point.module) {
@@ -703,6 +732,8 @@ static int add_module_to_probe_trace_events(struct probe_trace_event *tevs,
 	}
 
 	free(mod_name);
+	map__put(map);
+
 	return ret;
 }
 
@@ -760,7 +791,7 @@ arch__post_process_probe_trace_events(struct perf_probe_event *pev __maybe_unuse
 static int post_process_probe_trace_events(struct perf_probe_event *pev,
 					   struct probe_trace_event *tevs,
 					   int ntevs, const char *module,
-					   bool uprobe)
+					   bool uprobe, struct debuginfo *dinfo)
 {
 	int ret;
 
@@ -768,7 +799,8 @@ static int post_process_probe_trace_events(struct perf_probe_event *pev,
 		ret = add_exec_to_probe_trace_events(tevs, ntevs, module);
 	else if (module)
 		/* Currently ref_reloc_sym based probe is not for drivers */
-		ret = add_module_to_probe_trace_events(tevs, ntevs, module);
+		ret = post_process_module_probe_trace_events(tevs, ntevs,
+							     module, dinfo);
 	else
 		ret = post_process_kernel_probe_trace_events(tevs, ntevs);
 
@@ -812,30 +844,27 @@ static int try_to_find_probe_trace_events(struct perf_probe_event *pev,
 		}
 	}
 
-	debuginfo__delete(dinfo);
-
 	if (ntevs > 0) {	/* Succeeded to find trace events */
 		pr_debug("Found %d probe_trace_events.\n", ntevs);
 		ret = post_process_probe_trace_events(pev, *tevs, ntevs,
-						      pev->target, pev->uprobes);
+					pev->target, pev->uprobes, dinfo);
 		if (ret < 0 || ret == ntevs) {
+			pr_debug("Post processing failed or all events are skipped. (%d)\n", ret);
 			clear_probe_trace_events(*tevs, ntevs);
 			zfree(tevs);
+			ntevs = 0;
 		}
-		if (ret != ntevs)
-			return ret < 0 ? ret : ntevs;
-		ntevs = 0;
-		/* Fall through */
 	}
 
+	debuginfo__delete(dinfo);
+
 	if (ntevs == 0)	{	/* No error but failed to find probe point. */
 		pr_warning("Probe point '%s' not found.\n",
 			   synthesize_perf_probe_point(&pev->point));
 		return -ENOENT;
-	}
-	/* Error path : ntevs < 0 */
-	pr_debug("An error occurred in debuginfo analysis (%d).\n", ntevs);
-	if (ntevs < 0) {
+	} else if (ntevs < 0) {
+		/* Error path : ntevs < 0 */
+		pr_debug("An error occurred in debuginfo analysis (%d).\n", ntevs);
 		if (ntevs == -EBADF)
 			pr_warning("Warning: No dwarf info found in the vmlinux - "
 				"please rebuild kernel with CONFIG_DEBUG_INFO=y.\n");
diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
index df4debe564daab..0d9d6e0803b88b 100644
--- a/tools/perf/util/probe-finder.c
+++ b/tools/perf/util/probe-finder.c
@@ -1501,7 +1501,8 @@ int debuginfo__find_available_vars_at(struct debuginfo *dbg,
 }
 
 /* For the kernel module, we need a special code to get a DIE */
-static int debuginfo__get_text_offset(struct debuginfo *dbg, Dwarf_Addr *offs)
+int debuginfo__get_text_offset(struct debuginfo *dbg, Dwarf_Addr *offs,
+			       bool adjust_offset)
 {
 	int n, i;
 	Elf32_Word shndx;
@@ -1530,6 +1531,8 @@ static int debuginfo__get_text_offset(struct debuginfo *dbg, Dwarf_Addr *offs)
 			if (!shdr)
 				return -ENOENT;
 			*offs = shdr->sh_addr;
+			if (adjust_offset)
+				*offs -= shdr->sh_offset;
 		}
 	}
 	return 0;
@@ -1543,16 +1546,12 @@ int debuginfo__find_probe_point(struct debuginfo *dbg, unsigned long addr,
 	Dwarf_Addr _addr = 0, baseaddr = 0;
 	const char *fname = NULL, *func = NULL, *basefunc = NULL, *tmp;
 	int baseline = 0, lineno = 0, ret = 0;
-	bool reloc = false;
 
-retry:
+	/* We always need to relocate the address for aranges */
+	if (debuginfo__get_text_offset(dbg, &baseaddr, false) == 0)
+		addr += baseaddr;
 	/* Find cu die */
 	if (!dwarf_addrdie(dbg->dbg, (Dwarf_Addr)addr, &cudie)) {
-		if (!reloc && debuginfo__get_text_offset(dbg, &baseaddr) == 0) {
-			addr += baseaddr;
-			reloc = true;
-			goto retry;
-		}
 		pr_warning("Failed to find debug information for address %lx\n",
 			   addr);
 		ret = -EINVAL;
diff --git a/tools/perf/util/probe-finder.h b/tools/perf/util/probe-finder.h
index f1d8558f498e96..2956c51986529e 100644
--- a/tools/perf/util/probe-finder.h
+++ b/tools/perf/util/probe-finder.h
@@ -46,6 +46,9 @@ int debuginfo__find_trace_events(struct debuginfo *dbg,
 int debuginfo__find_probe_point(struct debuginfo *dbg, unsigned long addr,
 				struct perf_probe_point *ppt);
 
+int debuginfo__get_text_offset(struct debuginfo *dbg, Dwarf_Addr *offs,
+			       bool adjust_offset);
+
 /* Find a line range */
 int debuginfo__find_line_range(struct debuginfo *dbg, struct line_range *lr);
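
For reference, the checksum-aware pull/push pattern applied in the net/openvswitch/conntrack.c
hunks above looks roughly like the following in isolation. This is a minimal sketch, not code
from this patch: do_l3_work() is a hypothetical placeholder for whatever L3-level processing the
caller performs, and the point is only that a plain skb_pull()/skb_push() pair would leave
skb->csum stale on CHECKSUM_COMPLETE packets, whereas the *_rcsum helpers keep it in sync.

	#include <linux/skbuff.h>

	static int process_at_l3(struct sk_buff *skb)
	{
		int nh_ofs = skb_network_offset(skb);
		int err;

		/* Strip everything before the network header, updating skb->csum. */
		skb_pull_rcsum(skb, nh_ofs);

		err = do_l3_work(skb);	/* hypothetical L3-level processing */

		/* Restore the pulled bytes and fold them back into skb->csum. */
		skb_push(skb, nh_ofs);
		skb_postpush_rcsum(skb, skb->data, nh_ofs);
		return err;
	}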
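
Similarly, the tipc_buf_acquire() change above threads a gfp_t argument through all callers. A
minimal sketch of how a caller might pick the flag (again an illustration, not part of the patch;
build_msg() and may_sleep are hypothetical) is:

	static struct sk_buff *build_msg(u32 size, bool may_sleep)
	{
		/* Process context may sleep; timers, softirq and rcv paths must not. */
		return tipc_buf_acquire(size, may_sleep ? GFP_KERNEL : GFP_ATOMIC);
	}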