<b>Key idea</b>
In the "basket" approach, instead of
the traditional ordered list of nodes, the queue consists of an ordered list of groups
of nodes (logical baskets). The order of nodes in each basket need not be specified, and in
fact, it is easiest to maintain them in LIFO order. The baskets fulfill the following basic
rules:
- Each basket has a time interval in which all its nodes' enqueue operations overlap.
- The baskets are ordered by the order of their respective time intervals.
- For each basket, its nodes' dequeue operations occur after its time interval.
- The dequeue operations are performed according to the order of baskets.
Two properties define the FIFO order of nodes:
- The order of nodes within one basket need not be specified.
- The order of nodes in different baskets is the FIFO order of their respective baskets.
In algorithms such as the MS-queue or optimistic
queue, threads enqueue items by applying a Compare-and-swap (CAS) operation to the
queue's tail pointer, and all the threads that fail on a particular CAS operation (and also
the winner of that CAS) overlap in time. In particular, they share the time interval of
the CAS operation itself. Hence, all the threads that fail to CAS on the tail-node of
the queue may be inserted into the same basket. By integrating the basket mechanism
as the back-off mechanism, the time usually spent backing off before trying to link
onto the new tail can now be utilized to insert the failed operations into the basket,
allowing enqueues to complete sooner. In the meantime, the next successful CAS operations
by enqueues allow new baskets to be formed down the list, and these can be
filled concurrently. Moreover, the failed operations don't retry their link attempt on the
new tail, lowering the overall contention on it. This leads to a queue
algorithm that, unlike all former concurrent queue algorithms, requires virtually no tuning
of the back-off mechanisms to reduce contention, making the algorithm an attractive
out-of-the-box queue.
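As an illustration only, here is a hedged sketch of that enqueue path, with hypothetical
simplified types; the real algorithm additionally tags pointers to avoid the ABA problem
and re-validates that the tail node has not already been dequeued:
\code
#include <atomic>

struct node {
    int                value;
    std::atomic<node*> next{ nullptr };
};

std::atomic<node*> Tail;   // assumed global; points at (or near) the last node

void enqueue( node* n )
{
    for (;;) {
        node* t    = Tail.load();
        node* next = t->next.load();
        if ( next == nullptr ) {
            // race with other enqueuers to link onto the current tail node
            if ( t->next.compare_exchange_weak( next, n )) {
                Tail.compare_exchange_strong( t, n );   // swing Tail; failure is benign
                return;
            }
            // CAS failed: we overlapped in time with the winner, so this node
            // belongs to the winner's basket - link it here instead of backing
            // off and retrying on the new tail
            for (;;) {
                node* first = t->next.load();
                n->next.store( first );                 // basket is kept in LIFO order
                if ( t->next.compare_exchange_weak( first, n ))
                    return;
            }
        }
        else
            Tail.compare_exchange_strong( t, next );    // help a stale Tail forward
    }
}
\endcode
The inner loop is the basket insertion itself: contention moves from the single tail
pointer to independent baskets that fill concurrently.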
public: // for ThreadGC.
/*
GCC cannot compile code for the template versions of ThreadGC::allocGuard/freeGuard:
the compiler produces the error "'cds::gc::dhp::details::guard_data* cds::gc::dhp::details::guard::m_pGuard' is protected",
despite the fact that ThreadGC is declared as a friend of the guard class.
Therefore, we have to add public set_guard/get_guard functions.
*/
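The shape of that workaround, as a minimal hypothetical sketch (the real classes live in
\p cds/gc/dhp and differ in detail):
\code
namespace details {
    struct guard_data;                  // forward declaration

    class guard {
        friend class ThreadGC;          // should grant ThreadGC access to m_pGuard...
    protected:
        guard_data* m_pGuard = nullptr;
    public: // for ThreadGC.
        // ...but the affected GCC versions reject that access from ThreadGC's
        // template members, so public helpers are exposed instead
        void        set_guard( guard_data* p ) { m_pGuard = p; }
        guard_data* get_guard() const          { return m_pGuard; }
    };
}
\endcode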
- \p opt::allocator - although the skip-list is an intrusive container,
an allocator should be provided to maintain the variable, randomly calculated height of a node,
since a node can contain up to 32 next pointers. The allocator option is used to allocate the array of next pointers
for nodes whose height is greater than 1. The default is \ref CDS_DEFAULT_ALLOCATOR.
- \p opt::back_off - back-off strategy, default is \p cds::backoff::Default.
- \p opt::stat - internal statistics. By default, it is disabled (\p skip_list::empty_stat).
To enable it, use \p skip_list::stat.
*/
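A hedged example of passing these options through the usual \p make_traits idiom
(header and trait names may vary between library versions):
\code
#include <cds/container/skip_list_map_hp.h>
#include <string>

typedef cds::container::SkipListMap<
    cds::gc::HP, int, std::string,
    typename cds::container::skip_list::make_traits<
        cds::opt::back_off< cds::backoff::yield >,            // override the default back-off
        cds::opt::stat< cds::container::skip_list::stat<> >   // turn on internal statistics
    >::type
> map_type;
\endcode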
"Formal Verification of a practical lock-free queue algorithm"
A quotation from this work about the difference from the Michael & Scott algorithm:
"Our algorithm differs from Michael and Scott's [MS98] in that we test whether \p Tail points to the header
node only <b>after</b> \p Head has been updated, so a dequeuing process reads \p Tail only once. The dequeue in
[MS98] performs this test before checking whether the next pointer in the dummy node is null, which
means that it reads \p Tail every time a dequeuing process loops. Under high load, when operations retry
frequently, our modification will reduce the number of accesses to global memory. This modification, however,
introduces the possibility of \p Head and \p Tail "crossing"."
For an explanation of the template arguments, see \p intrusive::MSQueue.
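A hedged sketch of the dequeue loop that the quotation describes, with hypothetical
simplified types and no memory reclamation or version counters shown; note that \p Tail
is examined only after \p Head has been advanced:
\code
#include <atomic>

struct node {
    int                value;
    std::atomic<node*> next{ nullptr };
};

std::atomic<node*> Head;   // assumed globals; Head points to the dummy (header) node
std::atomic<node*> Tail;

bool dequeue( int& dest )
{
    for (;;) {
        node* h    = Head.load();
        node* next = h->next.load();
        if ( next == nullptr )
            return false;                                 // the queue is empty
        dest = next->value;                               // read before the node can be freed
        if ( Head.compare_exchange_weak( h, next )) {
            // Head is already updated - only now test whether Tail was left
            // behind on the old header node, so Tail is read at most once
            node* t = Tail.load();
            if ( h == t )
                Tail.compare_exchange_strong( t, next );  // help Tail catch up
            // the old header h would be retired to the garbage collector here
            return true;
        }
        // CAS failed: another dequeuer won; retry without touching Tail
    }
}
\endcode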
[from [2003] Ori Shalev, Nir Shavit "Split-Ordered Lists - Lock-free Resizable Hash Tables"]
The algorithm keeps all the items in one lock-free linked list, and gradually assigns the bucket pointers to
the places in the list where a sublist of "correct" items can be found. A bucket is initialized upon first
access by assigning it to a new "dummy" node in the list, preceding all items that should be
in that bucket. A newly created bucket splits an older bucket's chain, reducing the access cost to its items. The
table uses a modulo 2**i hash (there are known techniques for "pre-hashing" before a modulo 2**i hash
to overcome possible binary correlations among values). The table starts at size 2 and repeatedly doubles in size.
Unlike moving an item, the operation of directing a bucket pointer can be done
in a single CAS operation, and since items are not moved, they are never "lost".
However, to make this approach work, one must be able to keep the items in the
list sorted in such a way that any bucket's sublist can be "split" by directing a new
bucket pointer within it. This operation must be recursively repeatable, as every
split bucket may be split again and again as the hash table grows. To achieve this
goal, the authors introduced recursive split-ordering: a new ordering on keys that keeps items
in a given bucket adjacent in the list throughout the repeated splitting process.
Magically, yet perhaps not surprisingly, recursive split-ordering is achieved by
simple binary reversal: reversing the bits of the hash key so that the new key's
most significant bits (MSB) are those that were originally its least significant.
The split-order keys of regular nodes are exactly the bit-reverse image of the original
keys after turning on their MSB. For example, items 9 and 13 are both in the <tt>1 mod 4</tt> bucket,
which can be split into a <tt>1 mod 8</tt> bucket (containing 9) and a <tt>5 mod 8</tt> bucket (containing 13).
To insert (respectively delete or search for) an item in the hash table, hash its
key to the appropriate bucket using recursive split-ordering, follow the pointer to
the appropriate location in the sorted items list, and traverse the list until the key's
proper location in the split-ordering (respectively until the key or a key indicating
the item is not in the list is found). Because of the combinatorial structure induced
by the split-ordering, this will require traversal of no more than an expected constant number of items.
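For illustration only (not library code), the split-order key computation described
above can be sketched as:
\code
#include <cstdint>

// reverse the bits of a 32-bit hash so that the original LSBs become the MSBs
uint32_t reverse_bits( uint32_t v )
{
    uint32_t r = 0;
    for ( int i = 0; i < 32; ++i, v >>= 1 )
        r = ( r << 1 ) | ( v & 1 );
    return r;
}

// split-order key of a regular item: bit-reversal after turning on the MSB,
// so every regular key is odd
uint32_t so_regular_key( uint32_t hash ) { return reverse_bits( hash | 0x80000000u ); }

// split-order key of a dummy (bucket) node: plain bit-reversal, so it is even
// and precedes all regular items of its bucket in the sorted list
uint32_t so_dummy_key( uint32_t bucket ) { return reverse_bits( bucket ); }
\endcode
Since dummy keys are even and regular keys odd, a dummy node always sorts immediately
before the items of its bucket, which is what lets a new bucket pointer be dropped into
the list with a single CAS.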
The library does not dictate any thread model. To embed the library into your application you should choose
an appropriate implementation of the \p cds::threading::Manager interface
or provide your own.
The \p %cds::threading::Manager interface manages the \p cds::threading::ThreadData structure that contains the GC's thread-specific data.
Any \p cds::threading::Manager implementation is a singleton and it must be accessible from any thread and from any point of
your application. Note that you should not mix different implementations of \p cds::threading::Manager in your application.
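A hedged sketch of the typical initialization sequence with the default manager
(exact headers may vary between library versions):
\code
#include <cds/init.h>
#include <cds/gc/hp.h>
#include <cds/threading/model.h>

int main()
{
    cds::Initialize();                            // initialize library internals
    {
        cds::gc::HP hpGC;                         // construct the garbage collector singleton

        // every thread that touches libcds containers must be attached first
        cds::threading::Manager::attachThread();

        // ... work with libcds containers ...

        cds::threading::Manager::detachThread();  // detach before the thread exits
    }
    cds::Terminate();                             // release library internals
    return 0;
}
\endcode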
- The quiescent-state-based %RCU implementation offers
the best possible read-side performance, but requires that each thread periodically
calls a function to announce that it is in a quiescent state, thus strongly
constraining the application design. This type of %RCU is not implemented in \p libcds.
- The general-purpose %RCU implementation places almost no constraints on the application's
design, thus being appropriate for use within a general-purpose library, but it has
relatively higher read-side overhead. \p libcds contains several implementations of general-purpose
%RCU: \ref general_instant, \ref general_buffered, \ref general_threaded.
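As a hedged usage sketch (type and header names follow the \p cds::urcu headers; the RCU
singleton must be constructed at startup and each thread attached, as described above):
\code
#include <cds/urcu/general_buffered.h>

// general-purpose buffered RCU, wrapped for use as a garbage collector
typedef cds::urcu::gc< cds::urcu::general_buffered<> > rcu_type;

void reader_side()
{
    // RAII read-side critical section: memory reclamation is deferred
    // while the lock is held
    rcu_type::scoped_lock guard;
    // ... read an RCU-protected container here ...
}
\endcode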