`folly/Synchronized.h` introduces a simple abstraction for mutex-based concurrency. It replaces convoluted, unwieldy, and just plain wrong code with simple constructs that are easy to get right and difficult to get wrong.
Many of our multithreaded Thrift services (not to mention general concurrent C++ code) use shared data structures associated with locks. This follows the time-honored adage of mutex-based concurrency control: "associate mutexes with data, not code". Examples are abundant and easy to find. For example:
```cpp
class AdPublisherHandler : public AdPopulatorIf,
                           public fb303::FacebookBase,
                           public ZkBaseApplication {
  // ...
  OnDemandUpdateIdMap adsToBeUpdated_;
  ReadWriteMutex adsToBeUpdatedLock_;

  OnDemandUpdateIdMap limitsToBeUpdated_;
  ReadWriteMutex limitsToBeUpdatedLock_;

  OnDemandUpdateIdMap campaignsToBeUpdated_;
  ReadWriteMutex campaignsToBeUpdatedLock_;
};
```
Whenever the code needs to read or write some of the protected data, it acquires the mutex for reading or for reading and writing:
```cpp
void AdPublisherHandler::requestUpdateAdId(const int64_t adId,
                                           const int32_t dbId) {
  checkDbHandlingStatus(dbId);
  RWGuard g(adsToBeUpdatedLock_, RW_WRITE);
  adsToBeUpdated_[dbId][adId] = 1;
  adPublisherMonitor_->addStatValue("request_adId_update", 1, dbId);
  LOG(INFO) << "received request to update ad id " << adId;
}
```
The pattern is an absolute classic and present everywhere. However, it is inefficient, makes incorrect code easy to write, is prone to deadlocking, and is bulkier than it could otherwise be. To expand:
* In the code above, for example, the critical section is only the line right after `RWGuard`'s definition; it is wasteful that everything else (including a splurging `LOG(INFO)`) keeps the lock held for no good reason. This happens because the locked region is not visible: the guard's construction introduces a critical section that extends through the remainder of the enclosing scope.
* The correctness of the technique is entirely predicated on convention. There is no ostensible error for code that:
  * manipulates a piece of data without acquiring its lock first
  * acquires a different lock instead of the intended one
  * acquires a lock in read mode but modifies the guarded data structure
  * acquires a lock in read-write mode although it only has `const` access to the guarded data
  * acquires one lock when another lock is already held, which may lead to deadlocks if another thread acquires the locks in the opposite order
### Introduction to `folly/Synchronized.h`

The same code sample could be rewritten with `Synchronized` as follows:
```cpp
class AdPublisherHandler : public AdPopulatorIf,
                           public fb303::FacebookBase,
                           public ZkBaseApplication {
  // ...
  Synchronized<OnDemandUpdateIdMap>
    adsToBeUpdated_,
    limitsToBeUpdated_,
    campaignsToBeUpdated_;
};

void AdPublisherHandler::requestUpdateAdId(const int64_t adId,
                                           const int32_t dbId) {
  checkDbHandlingStatus(dbId);
  SYNCHRONIZED (adsToBeUpdated_) {
    adsToBeUpdated_[dbId][adId] = 1;
  }
  adPublisherMonitor_->addStatValue("request_adId_update", 1, dbId);
  LOG(INFO) << "received request to update ad id " << adId;
}
```
The rewrite does at maximum efficiency what needs to be done: it acquires the lock associated with the `OnDemandUpdateIdMap` object, writes to the map, and releases the lock immediately thereafter.
On the face of it, that's not much to write home about, and not an obvious improvement over the previous state of affairs. But the features invisibly at work in the code above are as important as the visible ones:
* Unlike before, the data and the mutex protecting it are inextricably encapsulated together.
* Critical sections are readily visible, which emphasizes code that needs to do minimal work and be subject to extra scrutiny.
* Dangerous nested `SYNCHRONIZED` statements are more visible than sequenced declarations of guards at the same level. (This is not foolproof because a method call issued inside a `SYNCHRONIZED` scope may open its own `SYNCHRONIZED` block.) A construct `SYNCHRONIZED_DUAL`, discussed later in this document, allows locking two objects quasi-simultaneously in the same order in all threads, thus avoiding deadlocks.
* If you tried to use `adsToBeUpdated_` outside the `SYNCHRONIZED` scope, you wouldn't be able to; it is virtually impossible to tease the map object out of its shell without acquiring the correct lock. Inside the `SYNCHRONIZED` scope, however, the *same* name serves as the actual underlying object of type `OnDemandUpdateIdMap` (which is a map of maps).
* Outside `SYNCHRONIZED`, if you just want to call one method, you can do so by using `adsToBeUpdated_` as a pointer-like object:

  ```cpp
  adsToBeUpdated_->clear();
  ```

  This acquires the mutex, calls `clear()` against the underlying map object, and releases the mutex immediately thereafter.
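To demystify that last point, here is a minimal sketch of how a locking `operator->` can work, built with only the standard library. This is decidedly not folly's actual implementation, and the names `MiniSynchronized` and `LockedPtr` here are illustrative; the idea is that `operator->` returns a temporary "locked pointer" whose constructor acquires the mutex and whose destructor, which runs at the end of the full expression, releases it:

```cpp
#include <cassert>
#include <mutex>
#include <vector>

// Sketch only: a pared-down stand-in for the real Synchronized<T>.
template <class T>
class MiniSynchronized {
 public:
  class LockedPtr {
   public:
    LockedPtr(T* data, std::mutex* m) : data_(data), lock_(*m) {}
    T* operator->() const { return data_; }  // forward the member call...
   private:
    T* data_;
    std::unique_lock<std::mutex> lock_;      // ...while holding the mutex
  };

  // obj->foo() creates a temporary LockedPtr; the mutex is held until the
  // temporary dies at the end of the full expression containing the call.
  LockedPtr operator->() { return LockedPtr(&data_, &mutex_); }

 private:
  T data_{};
  std::mutex mutex_;
};
```

With this sketch, `MiniSynchronized<std::vector<int>> v; v->push_back(42);` locks, appends, and unlocks, mirroring the `clear()` example above.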
`Synchronized` offers several other methods, which are described in detail below.
### Template class `Synchronized<T>`

#### Constructors

The default constructor default-initializes the data and its associated mutex.

The copy constructor locks the source for reading and copies its data into the target. (The target is not locked because an object under construction is only accessed by one thread.)
Finally, `Synchronized<T>` defines an explicit constructor that takes an object of type `T` and copies it. For example:

```cpp
// Default constructed
Synchronized< map<string, int> > syncMap1;

// Copy constructed
Synchronized< map<string, int> > syncMap2(syncMap1);

// Initializing from an existing map
map<string, int> init;
init["world"] = 42;
Synchronized< map<string, int> > syncMap3(init);
EXPECT_EQ(syncMap3->size(), 1);
```
#### Assignment, swap, and copying

The canonical assignment operator locks both objects involved and then copies the underlying data objects. The mutexes are not copied. The locks are acquired in increasing address order, so deadlock is avoided. For example, there is no problem if one thread assigns `a = b` and another assigns `b = a` (other than that design probably deserving a Razzie award). Similarly, the `swap` method takes a reference to another `Synchronized<T>` object and swaps the data. Again, locks are acquired in a well-defined order. The mutexes are not swapped.
An additional assignment operator accepts a `const T&` on the right-hand side. The operator copies the datum inside a critical section.

An additional `swap` method accepts a `T&` and swaps the data inside a critical section. This is by far the preferred method of changing the guarded datum wholesale because it keeps the lock only for a short time, thus lowering the pressure on the mutex.
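The idiom that makes `swap` so attractive can be sketched with the standard library alone; the mutex below stands in for the one `Synchronized<T>` encapsulates, and the names `g_data`, `g_dataMutex`, and `replaceWholesale` are made up for illustration. All the expensive work happens outside the lock; the critical section is just a constant-time pointer swap:

```cpp
#include <cassert>
#include <mutex>
#include <string>
#include <vector>

// Sketch of the "prepare outside, swap inside" idiom enabled by swap(T&).
std::vector<std::string> g_data;
std::mutex g_dataMutex;

void replaceWholesale() {
  // Expensive construction happens with no lock held.
  std::vector<std::string> fresh(1000, "item");
  std::lock_guard<std::mutex> guard(g_dataMutex);
  g_data.swap(fresh);  // O(1) critical section: just internal pointer swaps
}  // `guard` is destroyed first (unlocking), then `fresh` frees the old data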
To get a copy of the guarded data, there are two methods available: `void copy(T*)` and `T copy()`. The first copies data to a provided target and the second returns a copy by value. Both operations are done under a read lock. Example:
```cpp
Synchronized< fbvector<fbstring> > syncVec1, syncVec2;
fbvector<fbstring> vec;

// Assign from another Synchronized object
syncVec1 = syncVec2;
// Assign straight from vector
syncVec1 = vec;

// Swap with another Synchronized object
syncVec1.swap(syncVec2);
// Swap with a plain vector
syncVec1.swap(vec);

// Copy to given target
syncVec1.copy(&vec);
// Get a copy by value
auto copy = syncVec1.copy();
```
#### `LockedPtr operator->()` and `ConstLockedPtr operator->() const`
We've already seen `operator->` at work. Essentially, calling a method `obj->foo(x, y, z)` calls the method `foo(x, y, z)` inside a critical section that lasts exactly as long as the call itself. For example:
```cpp
void fun(Synchronized< fbvector<fbstring> >& vec) {
  vec->push_back("hello");
  vec->push_back("world");
}
```
The code above appends two elements to `vec`, but the elements won't necessarily appear one right after the other. This is because in between the two calls the mutex is released, and another thread may modify the vector. At the cost of anticipating a little, if you want to make sure you insert "world" right after "hello", you should write:

```cpp
void fun(Synchronized< fbvector<fbstring> >& vec) {
  SYNCHRONIZED (vec) {
    vec.push_back("hello");
    vec.push_back("world");
  }
}
```
This brings us to a cautionary discussion. The way `operator->` works is rather ingenious, creating an unnamed temporary that enforces locking, but it's not a panacea. Between two uses of `operator->`, other threads may change the synchronized object in arbitrary ways, so you shouldn't assume any sort of sequential consistency. For example, innocent-looking code like `if (!vec->empty()) vec->pop_back();` may be patently wrong.

If another thread clears the vector in between the call to `empty` and the call to `pop_back`, this code ends up attempting to extract an element from an empty vector. Similarly, iterating in the style of

```cpp
FOR_EACH_RANGE (i, vec->begin(), vec->end()) {
  // ...
}
```

is a crime punishable by long debugging nights: the two iterators come from separately locked calls, and the vector may change arbitrarily in between.
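The remedy is to put the check and the mutation inside one critical section, which is exactly what the `SYNCHRONIZED` macro (described below) makes convenient. A minimal sketch of the fixed pattern using only the standard library, with `std::mutex` standing in for the internal one (the names `g_vec`, `g_vecMutex`, and `popIfNonEmpty` are made up for illustration):

```cpp
#include <cassert>
#include <mutex>
#include <vector>

// One critical section covers both the emptiness check and the pop, so no
// other thread can clear the vector between them.
std::vector<int> g_vec;
std::mutex g_vecMutex;

bool popIfNonEmpty() {
  std::lock_guard<std::mutex> guard(g_vecMutex);
  if (g_vec.empty()) {
    return false;    // checked under the same lock...
  }
  g_vec.pop_back();  // ...that guards the mutation
  return true;
}
```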
If the `Synchronized<T>` object involved is `const`-qualified, then you'll only be able to call `const` methods through `operator->`. So, for example, `vec->push_back("xyz")` won't work if `vec` were `const`-qualified. The locking mechanism capitalizes on the assumption that `const` methods don't modify their underlying data, and therefore acquires only a read lock (as opposed to a read-write lock), which is cheaper but works only if the immutability assumption holds. Note that, strictly speaking, this is not guaranteed, because `const`-ness can always be undone via `mutable` members, casts, and surreptitious access to shared data. Our code is seldom guilty of such, and we also assume the STL uses no such shenanigans.
#### `asConst()`

Consider the following code:

```cpp
void fun(Synchronized<fbvector<fbstring>>& vec) {
  if (vec->size() > 1000000) {
    LOG(WARNING) << "The blinkenlights are overloaded.";
  }
  vec->push_back("another blinkenlight");
}
```
This code is correct (at least relative to its trivial intent), but less efficient than it could otherwise be. This is because the call `vec->size()` acquires a full read-write lock, yet it only needs a read lock. We need to help the type system here by telling it "even though `vec` is a mutable object, consider it a constant for this call". This should be easy enough because conversion to `const` is trivial: just issue `const_cast<const Synchronized<fbvector<fbstring>>&>(vec)`. Ouch. To make that operation simpler (a lot simpler), `Synchronized<T>` defines the method `asConst()`, which is a glorious one-liner. With `asConst` in tow, it's very easy to achieve what we wanted:
```cpp
void fun(Synchronized<fbvector<fbstring>>& vec) {
  if (vec.asConst()->size() > 1000000) {
    LOG(WARNING) << "The blinkenlights are overloaded.";
  }
  vec->push_back("another blinkenlight");
}
```

QED (Quite Easy Done). This concludes the documentation for `Synchronized<T>` itself; on to the macros that complete it.

### `SYNCHRONIZED`
The `SYNCHRONIZED` macro introduces a pseudo-statement that adds a whole new level of usability to `Synchronized<T>`. As discussed, `operator->` can only lock over the duration of a single call, so it is insufficient for complex operations. With `SYNCHRONIZED` you get to lock the object in a scoped manner (not unlike Java's `synchronized` statement) and to directly access the object inside that scope.
`SYNCHRONIZED` has two forms. We've seen the first one a couple of times already:

```cpp
void fun(Synchronized<fbvector<int>>& vec) {
  SYNCHRONIZED (vec) {
    vec.push_back(42);
    CHECK(vec.back() == 42);
  }
}
```
The scope introduced by `SYNCHRONIZED` is a critical section guarded by `vec`'s mutex. In addition to doing that, `SYNCHRONIZED` also performs an interesting sleight of hand: it binds the name `vec` inside the scope to the underlying `fbvector<int>` object, as opposed to `vec`'s normal type, which is `Synchronized<fbvector<int>>`. This fits nicely the principle of "form follows function": inside the critical section you have earned access to the actual data, and the name bindings reflect that as well. `SYNCHRONIZED(xyz)` essentially cracks `xyz` open temporarily and gives you access to its innards.
Now, what if `fun` wants to take a pointer to `Synchronized<fbvector<int>>`, let's call it `pvec`? More generally, what if we want to synchronize on an expression as opposed to a symbolic variable? In that case `SYNCHRONIZED(*pvec)` would not work because "`*pvec`" is not a name. That's where the second form of `SYNCHRONIZED` kicks in:
```cpp
void fun(Synchronized<fbvector<int>>* pvec) {
  SYNCHRONIZED (vec, *pvec) {
    vec.push_back(42);
    CHECK(vec.back() == 42);
  }
}
```
Ha, so now we pass two arguments to `SYNCHRONIZED`. The first argument is the name bound to the data, and the second argument is the expression referring to the `Synchronized<T>` object. So all cases are covered.
### `SYNCHRONIZED_CONST`

Recall from the discussion about `asConst()` that we sometimes want to voluntarily restrict access to an otherwise mutable object. The `SYNCHRONIZED_CONST` pseudo-statement makes that intent easily realizable and visible to maintainers. For example:
```cpp
void fun(Synchronized<fbvector<int>>& vec) {
  SYNCHRONIZED_CONST (vec) {
    CHECK(vec.size() > 42);
    // vec.push_back(29); // won't compile: vec is const inside this scope
  }
}
```
Inside a `SYNCHRONIZED_CONST(xyz)` scope, `xyz` is bound to a `const`-qualified datum. The corresponding lock is a read lock.
`SYNCHRONIZED_CONST` also has a two-argument version, just like `SYNCHRONIZED`. In fact, `SYNCHRONIZED_CONST(a)` simply expands to `SYNCHRONIZED(a, a.asConst())` and `SYNCHRONIZED_CONST(a, b)` expands to `SYNCHRONIZED(a, (b).asConst())`. The type system and `SYNCHRONIZED` take care of the rest.
### `TIMED_SYNCHRONIZED` and `TIMED_SYNCHRONIZED_CONST`

These pseudo-statements allow you to acquire the mutex with a timeout. For example:

```cpp
void fun(Synchronized<fbvector<int>>& vec) {
  TIMED_SYNCHRONIZED (10, vec) {
    if (vec) {
      vec->push_back(42);
      CHECK(vec->back() == 42);
    } else {
      LOG(INFO) << "Dognabbit, I've been waiting over here"
                   " for 10 milliseconds and couldn't get through!";
    }
  }
}
```
If the mutex acquisition is successful within a number of milliseconds dictated by its first argument, `TIMED_SYNCHRONIZED` binds its second argument to a pointer to the protected object. Otherwise, the pointer will be `NULL`. (Contrast that with `SYNCHRONIZED`, which always succeeds, so it binds the protected object to a reference.) Inside the `TIMED_SYNCHRONIZED` statement you must, of course, check that the pointer is not null in order to know whether the operation timed out.
`TIMED_SYNCHRONIZED` takes two or three parameters. The first is always the timeout, and the remaining one or two are just like the parameters of `SYNCHRONIZED`.

Issuing `TIMED_SYNCHRONIZED` with a zero timeout is an opportunistic attempt to acquire the mutex.
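The behavior can be sketched with the standard library's `std::timed_mutex`: try to acquire the lock for a bounded time and report failure instead of blocking indefinitely. This is not folly's code; the names `g_counterMutex` and `bumpWithTimeout` are made up for illustration.

```cpp
#include <cassert>
#include <chrono>
#include <mutex>

std::timed_mutex g_counterMutex;
int g_counter = 0;

bool bumpWithTimeout(int timeoutMs) {
  if (!g_counterMutex.try_lock_for(std::chrono::milliseconds(timeoutMs))) {
    return false;  // corresponds to the macro binding a NULL pointer
  }
  ++g_counter;     // corresponds to the body running with a valid pointer
  g_counterMutex.unlock();
  return true;
}
```

With a zero timeout this degenerates into an opportunistic `try_lock`, matching the remark above.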
### `UNSYNCHRONIZED`

`SYNCHRONIZED` is a good mechanism for enforcing scoped synchronization, but it has the inherent limitation that it requires the critical section to be, well, scoped. Sometimes the code structure requires a fleeting "escape" from the iron fist of synchronization. Clearly, simple cases are handled with sequenced `SYNCHRONIZED` scopes:
```cpp
Synchronized<map<int, string>> dic;
// ...
SYNCHRONIZED (dic) {
  if (dic.find(0) != dic.end()) {
    return dic[0];
  }
}
LOG(INFO) << "Key 0 not found, inserting it.";
SYNCHRONIZED (dic) {
  dic[0] = "zero";
}
```
For more complex, nested flow control, you may want to use the `UNSYNCHRONIZED` macro. It works (only) inside a `SYNCHRONIZED` pseudo-statement and temporarily unlocks the mutex:
```cpp
Synchronized<map<int, string>> dic;
// ...
SYNCHRONIZED (dic) {
  auto i = dic.find(0);
  if (i != dic.end()) {
    return i->second;
  }
  UNSYNCHRONIZED (dic) {
    LOG(INFO) << "Key 0 not found, inserting it.";
  }
  dic[0] = "zero";
}
```
Clearly `UNSYNCHRONIZED` comes with specific caveats and liabilities. You must assume that during the `UNSYNCHRONIZED` section, other threads might have changed the protected structure in arbitrary ways. In the example above, you cannot use the iterator `i` afterwards, and you cannot assume that the key `0` is still absent from the map; another thread might have inserted it while you were bragging on `LOG(INFO)`.
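The discipline `UNSYNCHRONIZED` demands can be sketched with a plain `std::unique_lock`, which also supports a temporary unlock-and-relock; after relocking, every prior finding must be re-validated. The names `g_dic`, `g_dicMutex`, and `insertIfAbsent` are made up for illustration:

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <string>

std::map<int, std::string> g_dic;
std::mutex g_dicMutex;

void insertIfAbsent(int key, const std::string& value) {
  std::unique_lock<std::mutex> lock(g_dicMutex);
  if (g_dic.find(key) == g_dic.end()) {
    lock.unlock();  // the "UNSYNCHRONIZED" window:
    // ... slow work (logging, I/O) happens here without the lock held ...
    lock.lock();    // relock, then RE-CHECK the condition:
    if (g_dic.find(key) == g_dic.end()) {  // another thread may have won
      g_dic.emplace(key, value);
    }
  }
}
```

The second `find` is the whole point: the negative lookup established before the unlock is stale by the time the lock is reacquired.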
### `SYNCHRONIZED_DUAL`

Sometimes locking just one object won't cut the mustard. Consider a function that needs to lock two `Synchronized` objects at the same time - for example, to copy some data from one to the other. At first sight, it looks like nested `SYNCHRONIZED` statements will work just fine:

```cpp
void fun(Synchronized<fbvector<int>>& a, Synchronized<fbvector<int>>& b) {
  SYNCHRONIZED (a) {
    SYNCHRONIZED (b) {
      // ... use a and b ...
    }
  }
}
```
This code compiles and may even run most of the time, but it embeds a deadly peril: if one thread calls `fun(x, y)` and another thread calls `fun(y, x)`, then the two threads are liable to deadlock, as each thread will be waiting for a lock the other is holding. This issue is a classic that applies regardless of the fact that the objects involved have the same type.
This classic problem has a classic solution: all threads must acquire locks in the same order. The actual order is not important, just the fact that the order is the same in all threads. Many libraries simply acquire mutexes in increasing order of their address, which is what we'll do, too. The pseudo-statement `SYNCHRONIZED_DUAL` takes care of all details of properly locking two objects and offering their innards:
```cpp
void fun(Synchronized<fbvector<int>>& a, Synchronized<fbvector<int>>& b) {
  SYNCHRONIZED_DUAL (myA, a, myB, b) {
    // ... use myA and myB ...
  }
}
```
To avoid potential confusion, `SYNCHRONIZED_DUAL` only defines a four-argument version. The code above locks `a` and `b` in increasing order of their address and offers their data under the names `myA` and `myB`, respectively.
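The address-ordering trick itself is easy to sketch with the standard library (the `Guarded` struct and `appendBoth` below are made up for illustration, not folly's machinery): every thread locks the mutex at the lower address first, so `fun(x, y)` and `fun(y, x)` can never deadlock each other.

```cpp
#include <cassert>
#include <mutex>
#include <vector>

struct Guarded {
  std::mutex mutex;
  std::vector<int> data;
};

void appendBoth(Guarded& a, Guarded& b, int value) {
  // Pick a consistent global order: lower address locks first.
  Guarded* first  = &a < &b ? &a : &b;
  Guarded* second = &a < &b ? &b : &a;
  std::lock_guard<std::mutex> g1(first->mutex);
  std::lock_guard<std::mutex> g2(second->mutex);
  a.data.push_back(value);
  b.data.push_back(value);
}
```

C++17's `std::scoped_lock` (and `std::lock` before it) achieves the same deadlock avoidance with its own ordering algorithm, which is another reasonable way to write this by hand.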
### Synchronizing several data items with one mutex

The library is geared at protecting one object of a given type with a mutex. However, sometimes we'd like to protect two or more members with the same mutex. Consider, for example, a bidirectional map, i.e. a map that holds an `int`-to-`string` mapping and also the converse `string`-to-`int` mapping. The two maps need to be manipulated simultaneously. There are at least two designs that come to mind.
#### Using a nested `struct`

You can easily pack the needed data items in a little `struct`:

```cpp
struct BiMap {
  map<int, string> direct;
  map<string, int> inverse;
};
Synchronized<BiMap> bimap_;
// ...
SYNCHRONIZED (bimap_) {
  bimap_.direct[0] = "zero";
  bimap_.inverse["zero"] = 0;
}
```
With this code in tow, you get to use `bimap_` just like any other `Synchronized` object, without much effort.
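The same design is easy to sketch as a runnable standard-library example: one mutex guards both directions of the mapping, so they can only change together. The `BiMap` class below is illustrative only, not folly code:

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <string>

class BiMap {
 public:
  void insert(int num, const std::string& name) {
    std::lock_guard<std::mutex> guard(mutex_);
    direct_[num] = name;
    inverse_[name] = num;  // both maps updated under the same lock
  }
  int toInt(const std::string& name) {
    std::lock_guard<std::mutex> guard(mutex_);
    return inverse_.at(name);
  }
 private:
  std::map<int, std::string> direct_;
  std::map<std::string, int> inverse_;
  std::mutex mutex_;
};
```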
#### Using `std::tuple`

If you won't stop short of using a spaceship-era approach, `std::tuple` is there for you. The example above could be rewritten for the same functionality like this:

```cpp
Synchronized<tuple<map<int, string>, map<string, int>>> bimap_;
// ...
SYNCHRONIZED (bimap_) {
  get<0>(bimap_)[0] = "zero";
  get<1>(bimap_)["zero"] = 0;
}
```
The code uses `std::get` with compile-time integers to access the fields in the tuple. The relative advantages and disadvantages of using a local `struct` vs. `std::tuple` are quite obvious: in the first case you need to invest in the definition, in the second case you need to put up with slightly more verbose and less clear access code.

### Summary
`Synchronized` and its supporting tools offer you a simple, robust paradigm for mutual exclusion-based concurrency. Instead of manually pairing data with the mutexes that protect it and relying on convention to use them appropriately, you can benefit from encapsulation and typechecking to offload a large part of that task and to provide good guarantees.