From: Nathan Bronson
Date: Fri, 16 Jan 2015 23:44:39 +0000 (-0800)
Subject: fix and improve MPMCQueue comment
X-Git-Tag: v0.23.0~40
X-Git-Url: http://demsky.eecs.uci.edu/git/?a=commitdiff_plain;h=62030871882d2218783a2cdd5fd2fc2999c53f10;p=folly.git

fix and improve MPMCQueue comment

Summary: The big comment block at the top of MPMCQueue had a nonsense
sentence resulting from a merge conflict, and also didn't include
details about ordering or recommended patterns for handling consumer
shutdown.

Test Plan: comment change

Reviewed By: lesha@fb.com

Subscribers: folly-diffs@

FB internal diff: D1788230

Signature: t1:1788230:1421451643:7c768c2b127499e3f2b06ee241fba851b4ed2a29
---

diff --git a/folly/MPMCQueue.h b/folly/MPMCQueue.h
index 9e3b88eb..6f28305f 100644
--- a/folly/MPMCQueue.h
+++ b/folly/MPMCQueue.h
@@ -46,6 +46,14 @@ template <typename> class MPMCPipelineStageImpl;
 /// up front. The bulk of the work of enqueuing and dequeuing can be
 /// performed in parallel.
 ///
+/// MPMCQueue is linearizable. That means that if a call to write(A)
+/// returns before a call to write(B) begins, then A will definitely end up
+/// in the queue before B, and if a call to read(X) returns before a call
+/// to read(Y) is started, then X will be something from earlier in the
+/// queue than Y. This also means that if a read call returns a value, you
+/// can be sure that all previous elements of the queue have been assigned
+/// a reader (that reader might not yet have returned, but it exists).
+///
 /// The underlying implementation uses a ticket dispenser for the head and
 /// the tail, spreading accesses across N single-element queues to produce
 /// a queue with capacity N. The ticket dispensers use atomic increment,
@@ -57,8 +65,7 @@ template <typename> class MPMCPipelineStageImpl;
 /// when the MPMCQueue's capacity is smaller than the number of enqueuers
 /// or dequeuers).
 ///
-/// NOEXCEPT INTERACTION: Ticket-based queues separate the assignment
-/// of In benchmarks (contained in tao/queues/ConcurrentQueueTests)
+/// In benchmarks (contained in tao/queues/ConcurrentQueueTests)
 /// it handles 1 to 1, 1 to N, N to 1, and N to M thread counts better
 /// than any of the alternatives present in fbcode, for both small (~10)
 /// and large capacities. In these benchmarks it is also faster than
@@ -67,17 +74,25 @@ template <typename> class MPMCPipelineStageImpl;
 /// queue because it uses futex() to block and unblock waiting threads,
 /// rather than spinning with sched_yield.
 ///
-/// queue positions from the actual construction of the in-queue elements,
-/// which means that the T constructor used during enqueue must not throw
-/// an exception. This is enforced at compile time using type traits,
-/// which requires that T be adorned with accurate noexcept information.
-/// If your type does not use noexcept, you will have to wrap it in
-/// something that provides the guarantee. We provide an alternate
-/// safe implementation for types that don't use noexcept but that are
-/// marked folly::IsRelocatable and boost::has_nothrow_constructor,
-/// which is common for folly types. In particular, if you can declare
-/// FOLLY_ASSUME_FBVECTOR_COMPATIBLE then your type can be put in
-/// MPMCQueue.
+/// NOEXCEPT INTERACTION: tl;dr; If it compiles you're fine. Ticket-based
+/// queues separate the assignment of queue positions from the actual
+/// construction of the in-queue elements, which means that the T
+/// constructor used during enqueue must not throw an exception. This is
+/// enforced at compile time using type traits, which requires that T be
+/// adorned with accurate noexcept information. If your type does not
+/// use noexcept, you will have to wrap it in something that provides
+/// the guarantee. We provide an alternate safe implementation for types
+/// that don't use noexcept but that are marked folly::IsRelocatable
+/// and boost::has_nothrow_constructor, which is common for folly types.
+/// In particular, if you can declare FOLLY_ASSUME_FBVECTOR_COMPATIBLE
+/// then your type can be put in MPMCQueue.
+///
+/// If you have a pool of N queue consumers that you want to shut down
+/// after the queue has drained, one way is to enqueue N sentinel values
+/// to the queue. If the producer doesn't know how many consumers there
+/// are you can enqueue one sentinel and then have each consumer requeue
+/// two sentinels after it receives it (by requeuing 2 the shutdown can
+/// complete in O(log P) time instead of O(P)).
 template <typename T,
          template <typename> class Atom = std::atomic>
 class MPMCQueue : boost::noncopyable {