/**
* AtomicHashArray is the building block for AtomicHashMap. It provides the
- * core lock-free functionality, but is limitted by the fact that it cannot
- * grow past it's initialization size and is a little more awkward (no public
+ * core lock-free functionality, but is limited by the fact that it cannot
+ * grow past its initialization size and is a little more awkward (no public
* constructor, for example). If you're confident that you won't run out of
 * space, don't mind the awkwardness, and really need bare-metal performance,
 * feel free to use AHA directly.
 */
/*
* AtomicHashMap --
*
- * A high performance concurrent hash map with int32 or int64 keys. Supports
+ * A high-performance concurrent hash map with int32 or int64 keys. Supports
* insert, find(key), findAt(index), erase(key), size, and more. Memory cannot
* be freed or reclaimed by erase. Can grow to a maximum of about 18 times the
* initial capacity, but performance degrades linearly with growth. Can also be
 * used as an object store with unique 32-bit references directly into the
 * internal storage (retrieved with iterator::getIndex()).
*
* Advantages:
- * - High performance (~2-4x tbb::concurrent_hash_map in heavily
+ * - High-performance (~2-4x tbb::concurrent_hash_map in heavily
* multi-threaded environments).
 * - Efficient memory usage if initial capacity is not overestimated
 *   (especially for small keys and values).
 * - Faster because of reduced data indirection.
*
* AHMap is a wrapper around AHArray sub-maps that allows growth and provides
- * an interface closer to the stl UnorderedAssociativeContainer concept. These
+ * an interface closer to the STL UnorderedAssociativeContainer concept. These
* sub-maps are allocated on the fly and are processed in series, so the more
* there are (from growing past initial capacity), the worse the performance.
 */
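The fixed-capacity, lock-free design described above can be sketched in self-contained C++. This is an illustration only, not folly's actual AtomicHashArray: the class name, sentinel key values, and linear-probing policy are all assumptions made for the sketch. The key idea it demonstrates is claiming a slot by CAS on the key (through an intermediate "locked" state so readers never see a key whose value is unwritten), which is why the array can never grow or reclaim erased memory.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <optional>

// Illustrative sketch only, NOT folly's AtomicHashArray: a fixed-capacity,
// lock-free, open-addressed array from int64 keys to int64 values.
class FixedAtomicArray {
 public:
  // Assumed sentinel keys; real user keys must avoid these values.
  static constexpr int64_t kEmpty = INT64_MIN;
  static constexpr int64_t kLocked = INT64_MIN + 1;

  explicit FixedAtomicArray(std::size_t capacity)
      : capacity_(capacity), slots_(new Slot[capacity]) {}
  ~FixedAtomicArray() { delete[] slots_; }

  // Returns false if the key is already present or the array is full.
  bool insert(int64_t key, int64_t value) {
    std::size_t idx = indexFor(key);
    for (std::size_t probe = 0; probe < capacity_; ++probe) {
      int64_t expected = kEmpty;
      // Claim the slot via an intermediate "locked" state so a concurrent
      // find() never observes a published key with an unwritten value.
      if (slots_[idx].key.compare_exchange_strong(expected, kLocked)) {
        slots_[idx].value.store(value, std::memory_order_relaxed);
        slots_[idx].key.store(key, std::memory_order_release);  // publish
        return true;
      }
      if (expected == key) return false;  // duplicate key
      // Simplification: a kLocked slot might hold this same key mid-insert;
      // a production implementation would wait and re-check.
      idx = (idx + 1) % capacity_;  // linear probe
    }
    return false;  // full: no growth past initialization size, by design
  }

  std::optional<int64_t> find(int64_t key) const {
    std::size_t idx = indexFor(key);
    for (std::size_t probe = 0; probe < capacity_; ++probe) {
      int64_t k = slots_[idx].key.load(std::memory_order_acquire);
      if (k == key) {
        return slots_[idx].value.load(std::memory_order_relaxed);
      }
      if (k == kEmpty) return std::nullopt;  // hit an unclaimed slot
      idx = (idx + 1) % capacity_;
    }
    return std::nullopt;
  }

 private:
  struct Slot {
    std::atomic<int64_t> key{kEmpty};
    std::atomic<int64_t> value{0};
  };

  std::size_t indexFor(int64_t key) const {
    return static_cast<std::size_t>(key) % capacity_;
  }

  std::size_t capacity_;
  Slot* slots_;
};
```

A real implementation additionally needs erase, a stronger hash, and the locked-slot wait noted above; AtomicHashMap then layers growth on top by chaining sub-arrays like this one and probing them in series, which is why performance degrades as more sub-maps are allocated.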
//
// If you have observed by profiling that your SharedMutex-s are getting
// cache misses on deferredReaders[] due to another SharedMutex user, then
-// you can use the tag type plus the RWDEFERREDLOCK_DECLARE_STATIC_STORAGE
-// macro to create your own instantiation of the type. The contention
-// threshold (see kNumSharedToStartDeferring) should make this unnecessary
-// in all but the most extreme cases. Make sure to check that the
-// increased icache and dcache footprint of the tagged result is worth it.
+// you can use the tag type to create your own instantiation of the type.
+// The contention threshold (see kNumSharedToStartDeferring) should make
+// this unnecessary in all but the most extreme cases. Make sure to check
+// that the increased icache and dcache footprint of the tagged result is
+// worth it.
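The tag-type mechanism above relies on a standard C++ property: a template parameterized on a tag type produces a distinct instantiation, and therefore a distinct copy of any static storage, per tag. A minimal self-contained illustration of that property (the names DeferredStorage, AppTag, and LibTag are hypothetical, not folly's):

```cpp
// Hypothetical illustration of why a tag type isolates static storage:
// each distinct Tag yields a distinct instantiation, hence a distinct
// copy of the static array (standing in for deferredReaders[]).
template <typename Tag>
struct DeferredStorage {
  static int slots[8];
};

template <typename Tag>
int DeferredStorage<Tag>::slots[8] = {};

struct AppTag {};  // one user's tag
struct LibTag {};  // another user's tag: separate storage, no contention
```

Writes through `DeferredStorage<AppTag>` are invisible to `DeferredStorage<LibTag>`, so differently tagged users can never cache-miss on each other's arrays; the cost is the duplicated code and data footprint the comment warns about.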
// SharedMutex's use of thread local storage is as an optimization, so
// for the case where thread local storage is not supported, define it