From: Paul E. McKenney
Date: Fri, 16 Nov 2012 17:59:58 +0000 (-0800)
Subject: Merge branches 'urgent.2012.10.27a', 'doc.2012.11.16a', 'fixes.2012.11.13a', 'srcu...
X-Git-Tag: firefly_0821_release~3680^2~1514^2^2~4
X-Git-Url: http://demsky.eecs.uci.edu/git/?a=commitdiff_plain;h=aac1cda34b84a9411d6b8d18c3658f094c834911;p=firefly-linux-kernel-4.4.55.git

Merge branches 'urgent.2012.10.27a', 'doc.2012.11.16a', 'fixes.2012.11.13a',
'srcu.2012.10.27a', 'stall.2012.11.13a', 'tracing.2012.11.08a' and
'idle.2012.10.24a' into HEAD

urgent.2012.10.27a: Fix for RCU user-mode transition (already in -tip).

doc.2012.11.08a: Documentation updates, most notably codifying the
	memory-barrier guarantees inherent to grace periods.

fixes.2012.11.13a: Miscellaneous fixes.

srcu.2012.10.27a: Allow statically allocated and initialized srcu_struct
	structures (courtesy of Lai Jiangshan).

stall.2012.11.13a: Add more diagnostic information to RCU CPU stall
	warnings, and also decrease the stall-warning timeout from 60 seconds
	to 21 seconds.

hotplug.2012.11.08a: Minor updates to CPU hotplug handling.

tracing.2012.11.08a: Improved debugfs tracing, courtesy of Michael Wang.

idle.2012.10.24a: Updates to RCU idle/adaptive-idle handling, including
	a boot parameter that maps normal grace periods to expedited.

Resolved conflict in kernel/rcutree.c due to side-by-side change.
---

aac1cda34b84a9411d6b8d18c3658f094c834911
diff --cc kernel/rcutree.c
index 74df86bd9204,74df86bd9204,15a2beec320f,74df86bd9204,24b21cba2cc8,8ed9c481db03,effd47a54b36..5ffadcc3bb26
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@@@@@@@ -2308,10 -2308,10 -2352,10 -2308,10 -2340,10 -2305,32 -2314,10 +2387,32 @@@@@@@@ static int synchronize_sched_expedited_
         */
       void synchronize_sched_expedited(void)
       {
----- -	int firstsnap, s, snap, trycount = 0;
+++++ +	long firstsnap, s, snap;
+++++ +	int trycount = 0;
+++++ +	struct rcu_state *rsp = &rcu_sched_state;
++++   
    - -	/* Note that atomic_inc_return() implies full memory barrier. */
    - -	firstsnap = snap = atomic_inc_return(&sync_sched_expedited_started);
+++++ +	/*
+++++ +	 * If we are in danger of counter wrap, just do synchronize_sched().
+++++ +	 * By allowing sync_sched_expedited_started to advance no more than
+++++ +	 * ULONG_MAX/8 ahead of sync_sched_expedited_done, we are ensuring
+++++ +	 * that more than 3.5 billion CPUs would be required to force a
+++++ +	 * counter wrap on a 32-bit system.  Quite a few more CPUs would of
+++++ +	 * course be required on a 64-bit system.
+++++ +	 */
+++++ +	if (ULONG_CMP_GE((ulong)atomic_long_read(&rsp->expedited_start),
+++++ +			 (ulong)atomic_long_read(&rsp->expedited_done) +
+++++ +			 ULONG_MAX / 8)) {
+++++ +		synchronize_sched();
+++++ +		atomic_long_inc(&rsp->expedited_wrap);
+++++ +		return;
+++++ +	}
    + +
----   	/* Note that atomic_inc_return() implies full memory barrier. */
----   	firstsnap = snap = atomic_inc_return(&sync_sched_expedited_started);
+++++ +	/*
+++++ +	 * Take a ticket.  Note that atomic_inc_return() implies a
+++++ +	 * full memory barrier.
+++++ +	 */
+++++ +	snap = atomic_long_inc_return(&rsp->expedited_start);
+++++ +	firstsnap = snap;
       	get_online_cpus();
       	WARN_ON_ONCE(cpu_is_offline(raw_smp_processor_id()));
@@@@@@@@ -2328,7 -2328,7 -2372,7 -2328,7 -2360,7 -2357,8 -2334,7 +2439,8 @@@@@@@@
       		if (trycount++ < 10) {
       			udelay(trycount * num_online_cpus());
       		} else {
------ 			synchronize_sched();
++++++ 			wait_rcu_gp(call_rcu_sched);
+++++ +			atomic_long_inc(&rsp->expedited_normal);
       			return;
       		}
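
A note on the first hunk's wrap check: ULONG_CMP_GE() compares the two
counters using unsigned (modular) subtraction, so the guard above asks
whether expedited_start has run more than ULONG_MAX/8 tickets ahead of
expedited_done.  The stand-alone sketch below is illustrative user-space
code, not part of this merge; the ULONG_CMP_GE() macro is written in the
form the kernel uses, and the start/done variables are hypothetical
stand-ins for rsp->expedited_start and rsp->expedited_done.

/*
 * Minimal user-space sketch of the counter-wrap guard added above.
 * start/done stand in for rsp->expedited_start and rsp->expedited_done.
 */
#include <limits.h>
#include <stdio.h>

#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))

int main(void)
{
	unsigned long start = ULONG_MAX - 2;	/* tickets handed out */
	unsigned long done  = ULONG_MAX - 5;	/* tickets completed  */

	/*
	 * Unsigned subtraction makes the comparison immune to wrap:
	 * (start - done) is the number of outstanding tickets even if
	 * start has already wrapped past zero while done has not.
	 */
	if (ULONG_CMP_GE(start, done + ULONG_MAX / 8))
		printf("danger of wrap: fall back to synchronize_sched()\n");
	else
		printf("outstanding tickets: %lu, expedited path is safe\n",
		       start - done);
	return 0;
}

With start only three tickets ahead of done, the guard correctly reports
that the expedited path is safe even though start is about to wrap, which
is exactly the property the ULONG_MAX/8 bound relies on.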
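
On the second hunk: wait_rcu_gp(call_rcu_sched) waits for a normal sched
grace period by posting an RCU callback and blocking on a completion, and
atomic_long_inc(&rsp->expedited_normal) counts how often the expedited
path fell back to a normal grace period (this counter feeds the improved
debugfs tracing).  Calling wait_rcu_gp() directly, rather than
synchronize_sched(), presumably also keeps the fallback from re-entering
the expedited path when normal grace periods are being mapped to expedited
ones.  The sketch below paraphrases the mechanism from memory; the
sketch_* names are hypothetical, and the authoritative implementation is
wait_rcu_gp() in kernel/rcupdate.c.

/*
 * Illustrative paraphrase of the wait_rcu_gp(call_rcu_sched) fallback
 * used above.  Names prefixed with sketch_ are hypothetical.
 */
#include <linux/completion.h>
#include <linux/kernel.h>
#include <linux/rcupdate.h>

struct sketch_rcu_synchronize {
	struct rcu_head head;
	struct completion completion;
};

static void sketch_wakeme(struct rcu_head *head)
{
	struct sketch_rcu_synchronize *rcu =
		container_of(head, struct sketch_rcu_synchronize, head);

	complete(&rcu->completion);	/* grace period ended: wake the waiter */
}

static void sketch_wait_sched_gp(void)
{
	struct sketch_rcu_synchronize rcu;	/* on-stack: we wait before returning */

	init_completion(&rcu.completion);
	call_rcu_sched(&rcu.head, sketch_wakeme);	/* runs after a full sched GP */
	wait_for_completion(&rcu.completion);		/* block until it does */
}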
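
The srcu.2012.10.27a branch mentioned above allows an srcu_struct to be
defined and initialized at compile time rather than requiring a run-time
init_srcu_struct() call.  Below is a minimal sketch of the intended usage,
assuming the DEFINE_STATIC_SRCU() macro that series introduces; my_srcu,
shared_ptr, reader() and updater() are hypothetical names used only for
illustration.

/*
 * Hedged sketch of a statically allocated, statically initialized
 * srcu_struct.  All identifiers other than the SRCU API are hypothetical.
 */
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/srcu.h>

DEFINE_STATIC_SRCU(my_srcu);		/* no init_srcu_struct() needed */

static int __rcu *shared_ptr;		/* hypothetical SRCU-protected pointer */

static int reader(void)
{
	int idx, val = 0;
	int *p;

	idx = srcu_read_lock(&my_srcu);		/* enter SRCU read-side section */
	p = srcu_dereference(shared_ptr, &my_srcu);
	if (p)
		val = *p;
	srcu_read_unlock(&my_srcu, idx);	/* exit, passing back the index */
	return val;
}

static void updater(int *newp)
{
	int *oldp;

	/* Caller is assumed to hold the (hypothetical) update-side lock. */
	oldp = rcu_dereference_protected(shared_ptr, 1);
	rcu_assign_pointer(shared_ptr, newp);
	synchronize_srcu(&my_srcu);	/* wait for pre-existing SRCU readers */
	kfree(oldp);			/* now safe to free the old version */
}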