Modification to the current specifications.
-1. Local State --- Beside of having one global sequential state (the one that we
-had in the old approach), we also allow each method call to have a local state.
-This local state is the accumulative side effects of the subset of method calls
-that happen before the current method call.
-To evaluate the local state of each method call, an obvious approach is
-calculate from the initial state for each. An optimization we can make here is
-that we can start to evaluate the state from the most recent common parent. And
-such a state is only evaluated when needed.
-
-2. CONCURRENT --- an operation that extracts all the methods that are executing
-"concurrently" with the current method.
-
-3. PREV --- an operation that extracts all the methods that execute right
-"before" the current method.
-
-1. The register file examples.
+I. Order
+-- Sequential order (SO): Some total order that is consistent with the union of
+the happens-before and SC relations.
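+
+As a concrete reading of this definition, the following sketch (with a
+hypothetical ExecutionGraph representation that is not part of our specification
+language) derives one sequential order by topologically sorting the method calls
+over the union of the happens-before and SC edges; any tie-breaking among
+unordered calls yields a valid SO.
+
+#include <queue>
+#include <utility>
+#include <vector>
+
+// Node i is one method call; edges holds the union of the hb and SC
+// relations (an edge (i, j) means i is hb/SC before j).
+struct ExecutionGraph {
+	int numMethods;
+	std::vector<std::pair<int, int>> edges;
+};
+
+// One possible sequential order: a topological sort of hb union SC.
+std::vector<int> sequentialOrder(const ExecutionGraph &g) {
+	std::vector<std::vector<int>> succ(g.numMethods);
+	std::vector<int> indegree(g.numMethods, 0);
+	for (const auto &e : g.edges) {
+		succ[e.first].push_back(e.second);
+		indegree[e.second]++;
+	}
+	std::queue<int> ready;
+	for (int i = 0; i < g.numMethods; i++)
+		if (indegree[i] == 0) ready.push(i);
+	std::vector<int> so;
+	while (!ready.empty()) {
+		int m = ready.front();
+		ready.pop();
+		so.push_back(m);
+		for (int s : succ[m])
+			if (--indegree[s] == 0) ready.push(s);
+	}
+	return so;	// shorter than numMethods only if hb/SC had a cycle
+}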
+
+II. State
+1. Global state: We allow users to specify a single global state so that when we
+execute the sequential replay (i.e., execute the methods in the sequential
+order), the whole process is similar to executing a sequential program. Such a
+global state is similar to the internal state of a sequential data structure. We
+also had this in our old version (the PLDI'16 rejection). As an analogy to the
+cache-memory model, the global state we define here is similar to main memory in
+the sense that there does not exist a real total order over all memory accesses,
+but to some degree (with non-deterministic specifications) we can maintain the
+illusion that some total order exists.
+
+2. Local state: Besides having one global state (the one that we have in the old
+approach), we also allow each method call to have a local state. This local
+state is the cumulative side effect of the subset of method calls that happen
+before the current method call. As an analogy to the cache-memory model, the
+local state we define here is similar to a cache: a local state is local in the
+sense that the current method call must have seen those effects. The main goal
+of having local states is to support tighter non-deterministic specifications.
+
+To evaluate the local state of each method call, an obvious approach is to
+execute, starting from the initial state, the subset of methods that happen
+before the current method in the sequential order. An optimization we can make
+is to start evaluating the state from the most recent dividing node, i.e., a
+node such that every other node in that subset is either hb before or hb after
+it. Also, since local states are not required by specifications all the time, a
+local state is only evaluated when needed (a sketch of this replay follows).
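+
+As a minimal sketch (using the register example from Section IV and a
+hypothetical MethodCall/State representation; only Store side effects are
+replayed here), the global state replays every call in the sequential order,
+while a call's local state replays only its hb/SC predecessors:
+
+#include <set>
+#include <vector>
+
+struct MethodCall {
+	bool isStore;			// Store or Load
+	int val;			// the stored value, if a Store
+	std::set<int> hbBefore;		// calls that are hb/SC before this one
+};
+
+struct State { int x = 0; };		// mirrors "@DeclareVar: int x; @InitVar: x = 0;"
+
+// Global state: replay every call in the sequential order so[].
+State globalState(const std::vector<MethodCall> &calls, const std::vector<int> &so) {
+	State s;
+	for (int m : so)
+		if (calls[m].isStore) s.x = calls[m].val;	// the Store side effect
+	return s;
+}
+
+// Local state of call cur: replay only the calls that are hb/SC before it,
+// still in the sequential order. (The dividing-node optimization would start
+// from that node's cached state instead of the initial state.)
+State localState(const std::vector<MethodCall> &calls, const std::vector<int> &so, int cur) {
+	State s;
+	for (int m : so)
+		if (calls[cur].hbBefore.count(m) && calls[m].isStore)
+			s.x = calls[m].val;
+	return s;
+}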
+
+III. Specifications
+Our specification language supports the following primitives for accessing both
+the global state and local states, so that users can write specifications with
+different levels of tightness (a sketch of these primitives follows the list).
+
+To support tighter specifications, we introduce the concept of the concurrent
+set of method calls: a specific method call can basically see the effects of
+two different categories of method calls --- those that happen before it, and
+those that execute concurrently with it. It is worth noting that when two
+method calls execute concurrently, there are in general two possible effects:
+1) the concurrent methods can happen in either order, and the final result
+remains the same --- a concurrent FIFO is an example, in which concurrent
+enqueue and dequeue methods can happen in an arbitrary order; and 2) the order
+of the concurrent methods affects the final result --- C/C++11 atomics are an
+example, in which concurrent stores to the same location that execute in
+different orders leave different final values.
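+
+For the second case, the small C/C++ program below (an illustration only) shows
+that with two concurrent relaxed stores to the same location, the final value
+depends on which store ends up later in the modification order:
+
+#include <atomic>
+#include <cstdio>
+#include <thread>
+
+std::atomic<int> x{0};
+
+int main() {
+	// Two concurrent stores: whichever is ordered later in x's modification
+	// order determines the final value, so both 1 and 2 are possible.
+	std::thread t1([] { x.store(1, std::memory_order_relaxed); });
+	std::thread t2([] { x.store(2, std::memory_order_relaxed); });
+	t1.join();
+	t2.join();
+	printf("final x = %d\n", x.load(std::memory_order_relaxed));
+	return 0;
+}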
+
+1. CONCURRENT: This primitive extracts all the methods that execute
+"concurrently" with the current method --- neither hb/SC before nor hb/SC after
+the current method --- and returns them as a set. It is worth noting that
+concurrent method calls cannot be used when calculating a local state; they can
+only be used in assertions.
+
+2. PREV: This primitive extracts all the methods that execute right before the
+current method in the execution graph --- the most recent method calls that are
+hb/SC before the current method call --- and returns them as a set. For each
+method in this set, the current method's specification can access its local
+state.
+
+3. LOCAL: This primitive allows users to access the local state of a method
+call. It is worth noting that when calculating the local state of a method
+call, one can only access the set of method calls that happen before that
+method call.
+
+Our specifications allow two ways of calculating local states: a default way
+and a user-customized way. The default approach is to execute the set of method
+calls that happen before the current method call in the sequential order; the
+user-customized approach lets users calculate the local state by using the PREV
+primitive to access the local states of previous method calls.
+
+4. COPY: This is a function that users provide to deep-copy the state. We
+require users to provide such a primitive function because each local state
+should have its own copy.
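+
+The following sketch (over a hypothetical execution-graph view, not our actual
+implementation) illustrates how CONCURRENT and PREV can be computed; LOCAL
+simply returns a call's state as computed in Section II, and COPY is shown for
+the register example:
+
+#include <set>
+#include <vector>
+
+struct Graph {
+	int numMethods;
+	std::vector<std::set<int>> hbBefore;	// hbBefore[i]: calls hb/SC before call i
+};
+
+// CONCURRENT(cur): calls that are neither hb/SC before nor hb/SC after cur.
+std::set<int> concurrentSet(const Graph &g, int cur) {
+	std::set<int> result;
+	for (int m = 0; m < g.numMethods; m++) {
+		if (m == cur) continue;
+		bool before = g.hbBefore[cur].count(m) != 0;
+		bool after = g.hbBefore[m].count(cur) != 0;
+		if (!before && !after) result.insert(m);
+	}
+	return result;
+}
+
+// PREV(cur): the most recent hb/SC predecessors of cur, i.e. predecessors
+// that no other predecessor of cur is hb/SC after.
+std::set<int> prevSet(const Graph &g, int cur) {
+	std::set<int> result;
+	for (int p : g.hbBefore[cur]) {
+		bool covered = false;
+		for (int q : g.hbBefore[cur])
+			if (q != p && g.hbBefore[q].count(p)) { covered = true; break; }
+		if (!covered) result.insert(p);
+	}
+	return result;
+}
+
+// COPY for the register example: each local state gets its own deep copy.
+struct State { int x; };
+State copyState(const State &s) { return State{s.x}; }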
+
+IV. Examples
+
+1. The register examples.
---------- Interface ----------
void init(atomic_int &loc, int initial);
int load(atomic_int &loc);
---------- Interface ----------
a. The SC atomics --- the classic linearizability approach
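+One possible specification in the style of the examples below (assuming that a
+declared variable such as x refers to the global state when it is not qualified
+with LOCAL) simply requires each load to return the current value of the global
+register in the sequential replay:
+---------- Specification ----------
+@DeclareVar: int x;
+@InitVar: x = 0;
+@Interface: Store
+@SideEffect:
+	x = val;
+void store(int *loc, int val);
+@Interface: Load
+@PreCondition:
+	return RET == x;
+int load(int *loc);
+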
-b. The ordered-store acq/rel atomics
-We basically allow a load operation to read form any store that this does not happen
-before
+b. The RA (release/acquire) C/C++ atomics
+// For RA atomics, a load must read its value from a store that happens before
+// it.
---------- Specification ----------
-// Store to the same location should conflict
-@Commutativity: Store <-> Store(Store.loc != Store.loc)
-@Commutativity: Store <-> Load
-@Commutativity: Load <-> Load
-
@DeclareVar: int x;
@InitVar: x = 0;
@Interface: Store
@SideEffect:
- x = val;
+ LOCAL.x = val;
+void store(int *loc, int val);
@Interface: Load
+@PreCondition:
+ IntegerSet *prevSet = new IntegerSet;
+ for (m in PREV) {
+ prevSet->add(m.LOCAL.x);
+ }
+ return prevSet->contains(RET);
@SideEffect:
-@PostCondition:
- __RET__ == x ||
- {Last.Store(_M.loc == loc)}
+ LOCAL.x = RET;
+int load(int *loc);
+c. The C/C++ atomics (a slightly loose specification)
+// Actually, any concurrent data structure that relies on modification order to
+// be correct would not have a precisely tight specification under our model,
+// and C/C++ relaxed atomics are an example. See the following read-read
+// coherence example.
+// T1			// T2
+x = 1;			x = 2;
+
+// T3
+r1 = x;	// r1 == 1
+r2 = x;	// r2 == 2
+r3 = x;	// r3 == 1
+
+Our model cannot prevent such a case from happening. However, we can still give
+a slightly loose specification which basically states that a load can read from
+any store that either immediately happens before it or executes concurrently
+with it.
+
+---------- Specification ----------
+@DeclareVar: int x;
+@InitVar: x = 0;
-c. The C/C++ atomics
+@Interface: Store
+@SideEffect:
+ LOCAL.x = val;
+void store(int *loc, int val);
+@Interface: Load
+@PreCondition:
+ IntegerSet *prevSet = new IntegerSet;
+ IntegerSet *concurSet = new IntegerSet;
+ for (m in PREV) {
+ prevSet->add(m.LOCAL.x);
+ }
+ for (m in CONCURRENT("STORE")) {
+ concurSet->add(m.val);
+ }
+ return prevSet->contains(RET) || concurSet->contains(RET);
+@SideEffect:
+ LOCAL.x = RET;
+int load(int *loc);
d. The C/C++ normal memory accesses
	Use the admissibility requirement, then apply the classic linearizability
	approach on the admissible executions.
-
A good example to simulate a queue data structure is as follows.
Suppose we have a special structure
typedef struct Q {
x and y fields in turn.
+
----------------------------------------------------------------------------------------
We also need to think about the non-ordered queue.