X-Git-Url: http://demsky.eecs.uci.edu/git/?a=blobdiff_plain;f=docs%2FAtomics.html;h=2358f4d2ef2268ccfa1ffb456d7b6be699c68cdf;hb=3429c7571e87ca6070ceb1b44b1f367ce23c99f9;hp=357f43167bf6aefc1c8233152f16bfb35ed214a2;hpb=91a44dd9ccd8ec3a10fa35315c381cffade91d5b;p=oota-llvm.git

diff --git a/docs/Atomics.html b/docs/Atomics.html
index 357f43167bf..2358f4d2ef2 100644
--- a/docs/Atomics.html
+++ b/docs/Atomics.html
@@ -4,7 +4,7 @@ LLVM Atomic Instructions and Concurrency Guide
-
+
@@ -121,9 +121,10 @@ void f(int* a) {

 However, LLVM is not allowed to transform the former to the latter: it could
-  introduce undefined behavior if another thread can access x at the same time.
-  (This example is particularly of interest because before the concurrency model
-  was implemented, LLVM would perform this transformation.)
+  indirectly introduce undefined behavior if another thread can access x at
+  the same time. (This example is particularly of interest because before the
+  concurrency model was implemented, LLVM would perform this
+  transformation.)
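The hunk above refers to a register-promotion example given earlier in Atomics.html. As a sketch of the kind of transformation being forbidden (the function names and loop here are illustrative, not the document's verbatim code):

```c
#include <assert.h>

int x;  /* imagine x is shared with other threads */

/* Original: stores to x only on iterations where a[i] is nonzero. */
void f(int *a, int n) {
  for (int i = 0; i < n; i++) {
    if (a[i])
      x += 1;
  }
}

/* The transformation the text forbids: promoting x into a temporary
 * adds an unconditional store to x at the end, even when the original
 * function would never have written x.  If another thread writes x
 * concurrently, that extra store introduces a data race. */
void f_invalid(int *a, int n) {
  int xtemp = x;
  for (int i = 0; i < n; i++) {
    if (a[i])
      xtemp += 1;
  }
  x = xtemp;  /* store not present in the original program */
}
```

Single-threaded, both versions compute the same result; the difference only becomes observable (and only becomes undefined behavior) in the presence of a concurrent writer.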

 Note that speculative loads are allowed; a load which is part of a race
 returns undef, but does not have undefined
@@ -177,7 +178,7 @@ void f(int* a) {

 In order to achieve a balance between performance and necessary guarantees,
 there are six levels of atomicity. They are listed in order of strength; each
 level includes all the guarantees of the previous level except for
-  Acquire/Release.
+  Acquire/Release. (See also LangRef.)
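For readers coming from C11/C++11, the six levels correspond roughly to the standard memory orders. A sketch of the mapping and of a release/acquire pairing (the mapping comments are an informal aid, not the normative definition — LangRef is authoritative):

```c
#include <stdatomic.h>
#include <assert.h>

/* Rough correspondence between LLVM atomicity levels and C11 orders:
 *   NotAtomic              -> plain (non-atomic) accesses
 *   Unordered              -> no direct C11 equivalent (Java-style accesses)
 *   Monotonic              -> memory_order_relaxed
 *   Acquire / Release      -> memory_order_acquire / memory_order_release
 *   SequentiallyConsistent -> memory_order_seq_cst
 */
atomic_int flag;   /* 0 = data not ready, 1 = ready */
int data;          /* plain, non-atomic payload */

void producer(void) {
  data = 42;                                              /* plain store   */
  atomic_store_explicit(&flag, 1, memory_order_release);  /* Release store */
}

int consumer(void) {
  if (atomic_load_explicit(&flag, memory_order_acquire))  /* Acquire load  */
    return data;   /* release/acquire pairing makes 42 visible */
  return -1;
}
```

If the acquire load observes the release store, the plain write to `data` is guaranteed to be visible as well; weaker orderings give no such guarantee.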

@@ -188,15 +189,15 @@ void f(int* a) {

 NotAtomic is the obvious, a load or store which is not atomic. (This isn't
 really a level of atomicity, but is listed here for comparison.) This is
-  essentially a regular load or store. If code accesses a memory location
-  from multiple threads at the same time, the resulting loads return
-  'undef'.
+  essentially a regular load or store. If there is a race on a given memory
+  location, loads from that location return undef.

 Relevant standard
 This is intended to match shared variables in C/C++, and to be used
 in any other context where memory access is necessary, and
-  a race is impossible.
+  a race is impossible. (The precise definition is in
+  LangRef.)
 Notes for frontends
 The rule is essentially that all memory accessed with basic loads and
 stores by multiple threads should be protected by a lock or other
@@ -307,7 +308,7 @@ void f(int* a) {
 which would make those optimizations useful.
 Notes for code generation
 Code generation is essentially the same as that for unordered for loads
-  and stores. No fences is required. cmpxchg and
+  and stores. No fences are required. cmpxchg and
 atomicrmw are required to appear as a single operation.
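The "single operation" requirement for cmpxchg and atomicrmw has a direct C11 analogue in the atomic read-modify-write functions; the C11 calls below are standard, while the pairing with specific LLVM instructions is this sketch's assumption:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

atomic_int counter;   /* zero-initialized at program start */

/* atomicrmw add analogue: other threads must observe the read and the
 * write as one indivisible operation; returns the old value. */
int bump(void) {
  return atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
}

/* cmpxchg analogue: atomically replace 0 with 1, failing if another
 * thread got there first. */
bool try_claim(void) {
  int expected = 0;
  return atomic_compare_exchange_strong_explicit(
      &counter, &expected, 1,
      memory_order_relaxed, memory_order_relaxed);
}
```

With relaxed (Monotonic) ordering these operations are still atomic; what they give up is any ordering guarantee relative to surrounding memory accesses.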
@@ -436,10 +437,10 @@ void f(int* a) {
 SequentiallyConsistent operations may not be reordered.
 Notes for code generation
 SequentiallyConsistent loads minimally require the same barriers
-  as Acquire operations and SequeuentiallyConsistent stores require
+  as Acquire operations and SequentiallyConsistent stores require
 Release barriers. Additionally, the code generator must enforce
-  ordering between SequentiallyConsistent stores followed by
-  SequentiallyConsistent loads. This is usually done by emitting
+  ordering between SequentiallyConsistent stores followed by
+  SequentiallyConsistent loads. This is usually done by emitting
 either a full fence before the loads or a full fence after the stores;
 which is preferred varies by architecture.
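The store-followed-by-load ordering that this hunk discusses is exactly what the classic store-buffering (Dekker-style) pattern exercises; a sketch using C11's seq_cst order (variable names here are illustrative):

```c
#include <stdatomic.h>
#include <assert.h>

/* Store-buffering pattern.  Under sequential consistency the outcome
 * r1 == 0 && r2 == 0 is forbidden: the full fence the code generator
 * places between a SequentiallyConsistent store and a following
 * SequentiallyConsistent load is what rules it out.  With weaker
 * orderings, hardware store buffers can make both loads return 0. */
atomic_int xv, yv;

int run_thread1(void) {
  atomic_store_explicit(&xv, 1, memory_order_seq_cst);
  return atomic_load_explicit(&yv, memory_order_seq_cst);   /* r1 */
}

int run_thread2(void) {
  atomic_store_explicit(&yv, 1, memory_order_seq_cst);
  return atomic_load_explicit(&xv, memory_order_seq_cst);   /* r2 */
}
```

Whichever thread's load executes last in the total order must observe the other thread's store, so at least one of r1, r2 is 1 in every execution.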