There are no constants of type x86mmx.
@@ -2111,14 +2104,6 @@ Classifications
the number and types of elements must match those specified by the
type.
- Array constants are represented with notation similar to array type
definitions (a comma separated list of elements, surrounded by square
@@ -2167,13 +2152,11 @@ Classifications
have pointer type. For example, the following is a
legal LLVM file:
-
-
+
@X = global i32 17
@Y = global i32 42
@Z = global [2 x i32*] [ i32* @X, i32* @Y ]
-
@@ -2183,8 +2166,8 @@ Classifications
The string 'undef' can be used anywhere a constant is expected, and
indicates that the user of the value may receive an unspecified bit-pattern.
- Undefined values may be of any type (other than label or void) and be used
- anywhere a constant is permitted.
+ Undefined values may be of any type (other than 'label'
+ or 'void') and be used anywhere a constant is permitted.
Undefined values are useful because they indicate to the compiler that the
program is well defined no matter what value is used. This gives the
@@ -2192,8 +2175,7 @@ Classifications
surprising) transformations that are valid (in pseudo IR):
-
-
+
%A = add %X, undef
%B = sub %X, undef
%C = xor %X, undef
@@ -2202,13 +2184,11 @@ Safe:
%B = undef
%C = undef
-
This is safe because all of the output bits are affected by the undef bits.
-Any output bit can have a zero or one depending on the input bits.
+ Any output bit can have a zero or one depending on the input bits.
-
-
+
%A = or %X, undef
%B = and %X, undef
Safe:
@@ -2218,19 +2198,18 @@ Unsafe:
%A = undef
%B = undef
-
These logical operations have bits that are not always affected by the input.
-For example, if "%X" has a zero bit, then the output of the 'and' operation will
-always be a zero, no matter what the corresponding bit from the undef is. As
-such, it is unsafe to optimize or assume that the result of the and is undef.
-However, it is safe to assume that all bits of the undef could be 0, and
-optimize the and to 0. Likewise, it is safe to assume that all the bits of
-the undef operand to the or could be set, allowing the or to be folded to
--1.
-
-
-
+ For example, if %X has a zero bit, then the output of the
+ 'and' operation will always be a zero for that bit, no matter what
+ the corresponding bit from the 'undef' is. As such, it is unsafe to
+ optimize or assume that the result of the 'and' is 'undef'.
+ However, it is safe to assume that all bits of the 'undef' could be
+ 0, and optimize the 'and' to 0. Likewise, it is safe to assume that
+ all the bits of the 'undef' operand to the 'or' could be
+ set, allowing the 'or' to be folded to -1.
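The two folding choices above can be checked on concrete bit patterns: picking all-zero bits for the 'undef' operand of 'and', and all-one bits for the 'undef' operand of 'or', reproduces the folded constants for any %X. A minimal C sketch of that reasoning (the function names are ours, for illustration only):

```c
#include <stdint.h>

/* Choose the undef bits adversarially: all zeros for 'and', all ones
   for 'or'.  These choices justify folding (x & undef) -> 0 and
   (x | undef) -> -1 for any concrete x. */
uint32_t fold_and_with_undef(uint32_t x) { return x & 0u;          /* always 0  */ }
uint32_t fold_or_with_undef(uint32_t x)  { return x | 0xFFFFFFFFu; /* always -1 */ }
```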
+
+
%A = select undef, %X, %Y
%B = select undef, 42, %Y
%C = select %X, %Y, undef
@@ -2243,18 +2222,17 @@ Unsafe:
%B = undef
%C = undef
-
-
-This set of examples show that undefined select (and conditional branch)
-conditions can go "either way" but they have to come from one of the two
-operands. In the %A example, if %X and %Y were both known to have a clear low
-bit, then %A would have to have a cleared low bit. However, in the %C example,
-the optimizer is allowed to assume that the undef operand could be the same as
-%Y, allowing the whole select to be eliminated.
+This set of examples shows that undefined 'select' (and conditional
+ branch) conditions can go either way, but they have to come from one
+ of the two operands. In the %A example, if %X and
+ %Y were both known to have a clear low bit, then %A would
+ have to have a cleared low bit. However, in the %C example, the
+ optimizer is allowed to assume that the 'undef' operand could be the
+ same as %Y, allowing the whole 'select' to be
+ eliminated.
-
-
+
%A = xor undef, undef
%B = undef
@@ -2272,57 +2250,53 @@ Safe:
%E = undef
%F = undef
-
-This example points out that two undef operands are not necessarily the same.
-This can be surprising to people (and also matches C semantics) where they
-assume that "X^X" is always zero, even if X is undef. This isn't true for a
-number of reasons, but the short answer is that an undef "variable" can
-arbitrarily change its value over its "live range". This is true because the
-"variable" doesn't actually have a live range. Instead, the value is
-logically read from arbitrary registers that happen to be around when needed,
-so the value is not necessarily consistent over time. In fact, %A and %C need
-to have the same semantics or the core LLVM "replace all uses with" concept
-would not hold.
+This example points out that two 'undef' operands are not
+ necessarily the same. This can be surprising to people (and also matches C
+ semantics), who assume that "X^X" is always zero, even
+ if X is undefined. This isn't true for a number of reasons, but the
+ short answer is that an 'undef' "variable" can arbitrarily change
+ its value over its "live range". This is true because the variable doesn't
+ actually have a live range. Instead, the value is logically read
+ from arbitrary registers that happen to be around when needed, so the value
+ is not necessarily consistent over time. In fact, %A and %C
+ need to have the same semantics or the core LLVM "replace all uses with"
+ concept would not hold.
-
-
+
%A = fdiv undef, %X
%B = fdiv %X, undef
Safe:
%A = undef
b: unreachable
-
These examples show the crucial difference between an undefined
-value and undefined behavior. An undefined value (like undef) is
-allowed to have an arbitrary bit-pattern. This means that the %A operation
-can be constant folded to undef because the undef could be an SNaN, and fdiv is
-not (currently) defined on SNaN's. However, in the second example, we can make
-a more aggressive assumption: because the undef is allowed to be an arbitrary
-value, we are allowed to assume that it could be zero. Since a divide by zero
-has undefined behavior, we are allowed to assume that the operation
-does not execute at all. This allows us to delete the divide and all code after
-it: since the undefined operation "can't happen", the optimizer can assume that
-it occurs in dead code.
-
-
-
-
+ value and undefined behavior. An undefined value (like
+ 'undef') is allowed to have an arbitrary bit-pattern. This means that
+ the %A operation can be constant folded to 'undef', because
+ the 'undef' could be an SNaN, and fdiv is not (currently)
+ defined on SNaN's. However, in the second example, we can make a more
+ aggressive assumption: because the undef is allowed to be an
+ arbitrary value, we are allowed to assume that it could be zero. Since a
+ divide by zero has undefined behavior, we are allowed to assume that
+ the operation does not execute at all. This allows us to delete the divide and
+ all code after it. Because the undefined operation "can't happen", the
+ optimizer can assume that it occurs in dead code.
+
+
a: store undef -> %X
b: store %X -> undef
Safe:
a: <deleted>
b: unreachable
-
-These examples reiterate the fdiv example: a store "of" an undefined value
-can be assumed to not have any effect: we can assume that the value is
-overwritten with bits that happen to match what was already there. However, a
-store "to" an undefined location could clobber arbitrary memory, therefore, it
-has undefined behavior.
+These examples reiterate the fdiv example: a store of an
+ undefined value can be assumed to not have any effect; we can assume that the
+ value is overwritten with bits that happen to match what was already there.
+ However, a store to an undefined location could clobber arbitrary
+ memory; therefore, it has undefined behavior.
@@ -2342,7 +2316,6 @@ has undefined behavior.
Trap value behavior is defined in terms of value dependence:
-
- Values other than phi nodes depend on
their operands.
@@ -2374,7 +2347,8 @@ has undefined behavior.
- An instruction with externally visible side effects depends on the most
recent preceding instruction with externally visible side effects, following
- the order in the IR. (This includes volatile loads and stores.)
+ the order in the IR. (This includes
+ volatile operations.)
- An instruction control-depends on a
terminator instruction
@@ -2385,7 +2359,6 @@ has undefined behavior.
- Dependence is transitive.
-
Whenever a trap value is generated, all values which depend on it evaluate
to trap. If they have side effects, they evoke their side effects as if each
@@ -2394,8 +2367,7 @@ has undefined behavior.
Here are some examples:
-
-
+
entry:
%trap = sub nuw i32 0, 1 ; Results in a trap value.
%still_trap = and i32 %trap, 0 ; Whereas (and i32 undef, 0) would return 0.
@@ -2430,7 +2402,6 @@ end:
; so this is defined (ignoring earlier
; undefined behavior in this example).
-
@@ -2446,18 +2417,17 @@ end:
the address of the entry block is illegal.
This value only has defined behavior when used as an operand to the
- 'indirectbr' instruction or for comparisons
- against null. Pointer equality tests between labels addresses is undefined
- behavior - though, again, comparison against null is ok, and no label is
- equal to the null pointer. This may also be passed around as an opaque
- pointer sized value as long as the bits are not inspected. This allows
- ptrtoint and arithmetic to be performed on these values so long as
- the original value is reconstituted before the indirectbr.
+ 'indirectbr' instruction, or for
+ comparisons against null. Pointer equality tests between label addresses
+ result in undefined behavior; though, again, comparison against null
+ is ok, and no label is equal to the null pointer. This may be passed around
+ as an opaque pointer-sized value as long as the bits are not inspected. This
+ allows ptrtoint and arithmetic to be performed on these values so
+ long as the original value is reconstituted before the indirectbr
+ instruction.
-Finally, some targets may provide defined semantics when
- using the value as the operand to an inline assembly, but that is target
- specific.
-
+Finally, some targets may provide defined semantics when using the value as
+ the operand to an inline assembly expression, but that is target-specific.
@@ -2472,107 +2442,117 @@ end:
to be used as constants. Constant expressions may be of
any first class type and may involve any LLVM
operation that does not have side effects (e.g. load and call are not
- supported). The following is the syntax for constant expressions:
+ supported). The following is the syntax for constant expressions:
- - trunc ( CST to TYPE )
+ - trunc (CST to TYPE)
- Truncate a constant to another type. The bit size of CST must be larger
than the bit size of TYPE. Both types must be integers.
- - zext ( CST to TYPE )
+ - zext (CST to TYPE)
- Zero extend a constant to another type. The bit size of CST must be
- smaller or equal to the bit size of TYPE. Both types must be
- integers.
+ smaller than the bit size of TYPE. Both types must be integers.
- Perform the specified operation on the LHS and RHS constants. OPCODE may
be any of the binary
or bitwise binary operations. The constraints
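The bit-level behavior of the trunc and zext constant expressions above matches ordinary unsigned integer conversion in C: trunc keeps the low-order bits of CST, and zext fills the new high bits with zero. A small illustrative sketch (function names are ours, not LLVM API):

```c
#include <stdint.h>

/* trunc (i64 CST to i32): keep only the low 32 bits of CST. */
uint32_t trunc_i64_to_i32(uint64_t cst) { return (uint32_t)cst; }

/* zext (i32 CST to i64): the new high 32 bits are zero. */
uint64_t zext_i32_to_i64(uint32_t cst) { return (uint64_t)cst; }
```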
@@ -2602,31 +2582,25 @@ end:
containing the asm needs to align its stack conservatively. An example
inline assembler expression is:
-
-
+
i32 (i32) asm "bswap $0", "=r,r"
-
Inline assembler expressions may only be used as the callee operand of
a call instruction. Thus, typically we
have:
-
-
+
%X = call i32 asm "bswap $0", "=r,r"(i32 %Y)
-
Inline asms with side effects not visible in the constraint list must be
marked as having side effects. This is done through the use of the
'sideeffect' keyword, like so:
-
-
+
call void asm sideeffect "eieio", ""()
-
In some cases inline asms will contain code that will not work unless the
stack is aligned in some way, such as calls or SSE instructions on x86,
@@ -2635,11 +2609,9 @@ call void asm sideeffect "eieio", ""()
contain and should generate its usual stack alignment code in the prologue
if the 'alignstack' keyword is present:
-
-
+
call void asm alignstack "eieio", ""()
-
If both keywords appear, the 'sideeffect' keyword must come
first.
@@ -2657,22 +2629,21 @@ call void asm alignstack "eieio", ""()
The call instructions that wrap inline asm nodes may have a "!srcloc" MDNode
- attached to it that contains a constant integer. If present, the code
- generator will use the integer as the location cookie value when report
+ attached to it that contains a list of constant integers. If present, the
+ code generator will use the integer as the location cookie value when reporting
errors through the LLVMContext error reporting mechanisms. This allows a
front-end to correlate backend errors that occur with inline asm back to the
source code that produced it. For example:
-
-
+
call void asm sideeffect "something bad", ""(), !srcloc !42
...
!42 = !{ i32 1234567 }
-
It is up to the front-end to make sense of the magic numbers it places in the
- IR.
+ IR. If the MDNode contains multiple constants, the code generator will use
+ the one that corresponds to the line of the asm that the error occurs on.
@@ -2704,22 +2675,18 @@ call void asm sideeffect "something bad", ""(), !srcloc !42
example: "!foo = metadata !{!4, !3}".
Metadata can be used as function arguments. Here the llvm.dbg.value
- function is using two metadata arguments.
+ intrinsic uses two metadata arguments.
-
-
+
call void @llvm.dbg.value(metadata !24, i64 0, metadata !25)
-
Metadata can be attached to an instruction. Here metadata !21 is
- attached with add instruction using !dbg identifier.
+ attached to the add instruction using the !dbg identifier.
-
-
+
%indvar.next = add i64 %indvar, 1, !dbg !21
-
@@ -3520,7 +3487,7 @@ Instruction
If the exact keyword is present, the result value of the
sdiv is a trap value if the result would
- be rounded or if overflow would occur.
+ be rounded.
Example:
@@ -4160,10 +4127,18 @@ Instruction
Arguments:
The first operand of an 'extractvalue' instruction is a value
- of struct, union or
+ of struct or
array type. The operands are constant indices to
specify which value to extract in a similar manner as indices in a
'getelementptr' instruction.
+ The major differences from getelementptr indexing are:
+
+ - Since the value being indexed is not a pointer, the first index is
+ omitted and assumed to be zero.
+ - At least one index must be specified.
+ - Not only struct indices but also array indices must be in
+ bounds.
+
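A rough C analogy for these rules: 'extractvalue' indexes an aggregate value directly, the way member access on a struct value does in C, so the leading pointer index of 'getelementptr' has no counterpart (the struct and field names below are illustrative, not from the source):

```c
/* extractvalue {i32, [3 x i32]} %agg, 1, 2  roughly corresponds to
   agg.arr[2]: the value itself is indexed, so there is no leading
   zero index as there would be with getelementptr on a pointer. */
struct Pair { int first; int arr[3]; };

int extract_1_2(struct Pair agg) { return agg.arr[2]; }
```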
Semantics:
The result is the value at the position in the aggregate specified by the
@@ -4194,11 +4169,11 @@ Instruction
Arguments:
The first operand of an 'insertvalue' instruction is a value
- of struct, union or
+ of struct or
array type. The second operand is a first-class
value to insert. The following operands are constant indices indicating
the position at which to insert the value in a similar manner as indices in
- a 'getelementptr' instruction. The
+ an 'extractvalue' instruction. The
value to insert must have the same type as the value identified by the
indices.
@@ -4239,7 +4214,7 @@ Instruction
Syntax:
- <result> = alloca <type>[, i32 <NumElements>][, align <alignment>] ; yields {type*}:result
+ <result> = alloca <type>[, <ty> <NumElements>][, align <alignment>] ; yields {type*}:result
Overview:
@@ -4347,8 +4322,8 @@ Instruction
Syntax:
- store <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !] ; yields {void}
- volatile store <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !] ; yields {void}
+ store <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>] ; yields {void}
+ volatile store <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>] ; yields {void}
Overview:
@@ -4373,7 +4348,7 @@ Instruction
produce less efficient code. An alignment of 1 is always safe.
The optional !nontemporal metadata must reference a single metadata
- name corresponding to a metadata node with one i32 entry of
+ name <index> corresponding to a metadata node with one i32 entry of
value 1. The existence of the !nontemporal metadata on the
instruction tells the optimizer and code generator that this store is
not expected to be reused in the cache. The code generator may
@@ -4427,12 +4402,12 @@ Instruction
indexes a value of the type pointed to (not necessarily the value directly
pointed to, since the first index can be non-zero), etc. The first type
indexed into must be a pointer value, subsequent types can be arrays,
- vectors, structs and unions. Note that subsequent types being indexed into
+ vectors, and structs. Note that subsequent types being indexed into
can never be pointers, since that would require loading the pointer before
continuing calculation.
The type of each index argument depends on the type it is indexing into.
- When indexing into a (optionally packed) structure or union, only i32
+ When indexing into an (optionally packed) structure, only i32
integer constants are allowed. When indexing into an array, pointer
or vector, integers of any width are allowed, and they are not required to be
constant.
@@ -4440,8 +4415,7 @@ Instruction
For example, let's consider a C code fragment and how it gets compiled to
LLVM:
-
-
+
struct RT {
char A;
int B[10][20];
@@ -4457,12 +4431,10 @@ int *foo(struct ST *s) {
return &s[1].Z.B[5][13];
}
-
The LLVM code generated by the GCC frontend is:
-
-
+
%RT = type { i8 , [10 x [20 x i32]], i8 }
%ST = type { i32, double, %RT }
@@ -4472,7 +4444,6 @@ entry:
ret i32* %reg
}
-
Semantics:
In the example above, the first index is indexing into the '%ST*'
@@ -5406,7 +5377,7 @@ Loop: ; Infinite loop that counts from 0 on up...
Example:
%retval = call i32 @test(i32 %argc)
- call i32 (i8 *, ...)* @printf(i8 * %msg, i32 12, i8 42) ; yields i32
+ call i32 (i8*, ...)* @printf(i8* %msg, i32 12, i8 42) ; yields i32
%X = tail call i32 @foo() ; yields i32
%Y = tail call fastcc i32 @foo() ; yields i32
call void %foo(i8 97 signext)
@@ -5543,8 +5514,7 @@ freestanding environments and non-C-based languages.
instruction and the variable argument handling intrinsic functions are
used.
-
-
+
define i32 @test(i32 %X, ...) {
; Initialize variable argument processing
%ap = alloca i8*
@@ -5569,7 +5539,6 @@ declare void @llvm.va_start(i8*)
declare void @llvm.va_copy(i8*, i8*)
declare void @llvm.va_end(i8*)
-
@@ -5839,7 +5808,7 @@ LLVM.
Syntax:
- declare i8 *@llvm.frameaddress(i32 <level>)
+ declare i8* @llvm.frameaddress(i32 <level>)
Overview:
@@ -5873,7 +5842,7 @@ LLVM.
Syntax:
- declare i8 *@llvm.stacksave()
+ declare i8* @llvm.stacksave()
Overview:
@@ -5903,7 +5872,7 @@ LLVM.
Syntax:
- declare void @llvm.stackrestore(i8 * %ptr)
+ declare void @llvm.stackrestore(i8* %ptr)
Overview:
@@ -5992,7 +5961,7 @@ LLVM.
Syntax:
- declare i64 @llvm.readcyclecounter( )
+ declare i64 @llvm.readcyclecounter()
Overview:
@@ -6037,9 +6006,9 @@ LLVM.
all bit widths however.
- declare void @llvm.memcpy.p0i8.p0i8.i32(i8 * <dest>, i8 * <src>,
+ declare void @llvm.memcpy.p0i8.p0i8.i32(i8* <dest>, i8* <src>,
i32 <len>, i32 <align>, i1 <isvolatile>)
- declare void @llvm.memcpy.p0i8.p0i8.i64(i8 * <dest>, i8 * <src>,
+ declare void @llvm.memcpy.p0i8.p0i8.i64(i8* <dest>, i8* <src>,
i64 <len>, i32 <align>, i1 <isvolatile>)
@@ -6091,9 +6060,9 @@ LLVM.
widths however.
- declare void @llvm.memmove.p0i8.p0i8.i32(i8 * <dest>, i8 * <src>,
+ declare void @llvm.memmove.p0i8.p0i8.i32(i8* <dest>, i8* <src>,
i32 <len>, i32 <align>, i1 <isvolatile>)
- declare void @llvm.memmove.p0i8.p0i8.i64(i8 * <dest>, i8 * <src>,
+ declare void @llvm.memmove.p0i8.p0i8.i64(i8* <dest>, i8* <src>,
i64 <len>, i32 <align>, i1 <isvolatile>)
@@ -6143,13 +6112,13 @@ LLVM.
Syntax:
This is an overloaded intrinsic. You can use llvm.memset on any integer bit
- width and for different address spaces. Not all targets support all bit
- widths however.
+ width and for different address spaces. However, not all targets support all
+ bit widths.
- declare void @llvm.memset.p0i8.i32(i8 * <dest>, i8 <val>,
+ declare void @llvm.memset.p0i8.i32(i8* <dest>, i8 <val>,
i32 <len>, i32 <align>, i1 <isvolatile>)
- declare void @llvm.memset.p0i8.i64(i8 * <dest>, i8 <val>,
+ declare void @llvm.memset.p0i8.i64(i8* <dest>, i8 <val>,
i64 <len>, i32 <align>, i1 <isvolatile>)
@@ -6158,14 +6127,14 @@ LLVM.
particular byte value.
Note that, unlike the standard libc function, the llvm.memset
- intrinsic does not return a value, takes extra alignment/volatile arguments,
- and the destination can be in an arbitrary address space.
+ intrinsic does not return a value and takes extra alignment/volatile
+ arguments. Also, the destination can be in an arbitrary address space.
Arguments:
The first argument is a pointer to the destination to fill, the second is the
- byte value to fill it with, the third argument is an integer argument
+ byte value with which to fill it, the third argument is an integer argument
specifying the number of bytes to fill, and the fourth argument is the known
- alignment of destination location.
+ alignment of the destination location.
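For comparison, the libc memset call below has the same fill behavior, though it returns the destination pointer and has no alignment or volatile arguments; this sketch illustrates only the fill semantics, not the intrinsic's signature:

```c
#include <string.h>

/* Fill n bytes at dest with val, as llvm.memset does (the intrinsic's
   alignment and isvolatile arguments have no libc counterpart). */
void fill_bytes(unsigned char *dest, unsigned char val, size_t n) {
    memset(dest, val, n);
}
```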
If the call to this intrinsic has an alignment value that is not 0 or 1,
then the caller guarantees that the destination pointer is aligned to that
@@ -6922,7 +6891,8 @@ LLVM.
This intrinsic makes it possible to excise one parameter, marked with
- the nest attribute, from a function. The result is a callable
+ the nest attribute, from a function.
+ The result is a callable
function pointer lacking the nest parameter - the caller does not need to
provide a value for it. Instead, the value to use is stored in advance in a
"trampoline", a block of memory usually allocated on the stack, which also
@@ -6934,17 +6904,15 @@ LLVM.
pointer has signature
i32 (i32, i32)*. It can be created as
follows:
-
-
+
%tramp = alloca [10 x i8], align 4 ; size and alignment only correct for X86
%tramp1 = getelementptr [10 x i8]* %tramp, i32 0, i32 0
- %p = call i8* @llvm.init.trampoline( i8* %tramp1, i8* bitcast (i32 (i8* nest , i32, i32)* @f to i8*), i8* %nval )
+ %p = call i8* @llvm.init.trampoline(i8* %tramp1, i8* bitcast (i32 (i8* nest , i32, i32)* @f to i8*), i8* %nval)
%fp = bitcast i8* %p to i32 (i32, i32)*
-
-
-The call %val = call i32 %fp( i32 %x, i32 %y ) is then equivalent
- to %val = call i32 %f( i8* %nval, i32 %x, i32 %y ).
+
The call %val = call i32 %fp(i32 %x, i32 %y) is then equivalent
+ to %val = call i32 %f(i8* %nval, i32 %x, i32 %y).
@@ -7024,7 +6992,7 @@ LLVM.
Syntax:
- declare void @llvm.memory.barrier( i1 <ll>, i1 <ls>, i1 <sl>, i1 <ss>, i1 <device> )
+ declare void @llvm.memory.barrier(i1 <ll>, i1 <ls>, i1 <sl>, i1 <ss>, i1 <device>)
Overview:
@@ -7081,7 +7049,7 @@ LLVM.
store i32 4, %ptr
%result1 = load i32* %ptr
; yields {i32}:result1 = 4
- call void @llvm.memory.barrier( i1 false, i1 true, i1 false, i1 false )
+ call void @llvm.memory.barrier(i1 false, i1 true, i1 false, i1 false)
; guarantee the above finishes
store i32 8, %ptr
; before this begins
@@ -7101,10 +7069,10 @@ LLVM.
support all bit widths however.
- declare i8 @llvm.atomic.cmp.swap.i8.p0i8( i8* <ptr>, i8 <cmp>, i8 <val> )
- declare i16 @llvm.atomic.cmp.swap.i16.p0i16( i16* <ptr>, i16 <cmp>, i16 <val> )
- declare i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* <ptr>, i32 <cmp>, i32 <val> )
- declare i64 @llvm.atomic.cmp.swap.i64.p0i64( i64* <ptr>, i64 <cmp>, i64 <val> )
+ declare i8 @llvm.atomic.cmp.swap.i8.p0i8(i8* <ptr>, i8 <cmp>, i8 <val>)
+ declare i16 @llvm.atomic.cmp.swap.i16.p0i16(i16* <ptr>, i16 <cmp>, i16 <val>)
+ declare i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* <ptr>, i32 <cmp>, i32 <val>)
+ declare i64 @llvm.atomic.cmp.swap.i64.p0i64(i64* <ptr>, i64 <cmp>, i64 <val>)
Overview:
@@ -7133,13 +7101,13 @@ LLVM.
store i32 4, %ptr
%val1 = add i32 4, 4
-%result1 = call i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* %ptr, i32 4, %val1 )
+%result1 = call i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* %ptr, i32 4, %val1)
; yields {i32}:result1 = 4
%stored1 = icmp eq i32 %result1, 4
; yields {i1}:stored1 = true
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 8
%val2 = add i32 1, 1
-%result2 = call i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* %ptr, i32 5, %val2 )
+%result2 = call i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* %ptr, i32 5, %val2)
; yields {i32}:result2 = 8
%stored2 = icmp eq i32 %result2, 5
; yields {i1}:stored2 = false
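The example above maps onto C11 atomics; a sketch that mimics the intrinsic's return-the-old-value convention (atomic_compare_exchange_strong itself returns a success flag and writes the observed value back into its expected argument):

```c
#include <stdatomic.h>

/* CAS mimicking llvm.atomic.cmp.swap: returns the value *ptr held
   before the operation.  On success 'expected' keeps cmp (the old
   value); on failure it is updated to the observed old value. */
int cmp_swap_i32(atomic_int *ptr, int cmp, int val) {
    int expected = cmp;
    atomic_compare_exchange_strong(ptr, &expected, val);
    return expected;
}
```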
@@ -7159,10 +7127,10 @@ LLVM.
integer bit width. Not all targets support all bit widths however.
- declare i8 @llvm.atomic.swap.i8.p0i8( i8* <ptr>, i8 <val> )
- declare i16 @llvm.atomic.swap.i16.p0i16( i16* <ptr>, i16 <val> )
- declare i32 @llvm.atomic.swap.i32.p0i32( i32* <ptr>, i32 <val> )
- declare i64 @llvm.atomic.swap.i64.p0i64( i64* <ptr>, i64 <val> )
+ declare i8 @llvm.atomic.swap.i8.p0i8(i8* <ptr>, i8 <val>)
+ declare i16 @llvm.atomic.swap.i16.p0i16(i16* <ptr>, i16 <val>)
+ declare i32 @llvm.atomic.swap.i32.p0i32(i32* <ptr>, i32 <val>)
+ declare i64 @llvm.atomic.swap.i64.p0i64(i64* <ptr>, i64 <val>)
Overview:
@@ -7189,13 +7157,13 @@ LLVM.
store i32 4, %ptr
%val1 = add i32 4, 4
-%result1 = call i32 @llvm.atomic.swap.i32.p0i32( i32* %ptr, i32 %val1 )
+%result1 = call i32 @llvm.atomic.swap.i32.p0i32(i32* %ptr, i32 %val1)
; yields {i32}:result1 = 4
%stored1 = icmp eq i32 %result1, 4
; yields {i1}:stored1 = true
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 8
%val2 = add i32 1, 1
-%result2 = call i32 @llvm.atomic.swap.i32.p0i32( i32* %ptr, i32 %val2 )
+%result2 = call i32 @llvm.atomic.swap.i32.p0i32(i32* %ptr, i32 %val2)
; yields {i32}:result2 = 8
%stored2 = icmp eq i32 %result2, 8
; yields {i1}:stored2 = true
@@ -7217,10 +7185,10 @@ LLVM.
any integer bit width. Not all targets support all bit widths however.
- declare i8 @llvm.atomic.load.add.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.add.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.add.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.add.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.add.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.add.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.add.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.add.i64.p0i64(i64* <ptr>, i64 <delta>)
Overview:
@@ -7243,11 +7211,11 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 4, %ptr
-%result1 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 4 )
+%result1 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 4)
; yields {i32}:result1 = 4
-%result2 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 2 )
+%result2 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 2)
; yields {i32}:result2 = 8
-%result3 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 5 )
+%result3 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 5)
; yields {i32}:result3 = 10
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 15
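The C11 counterpart of this intrinsic is atomic_fetch_add, which likewise returns the value held before the addition; a short sketch replaying the example's arithmetic:

```c
#include <stdatomic.h>

/* llvm.atomic.load.add: atomically add delta to *ptr and return the
   value *ptr held before the addition. */
int load_add_i32(atomic_int *ptr, int delta) {
    return atomic_fetch_add(ptr, delta);
}
```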
@@ -7268,10 +7236,10 @@ LLVM.
support all bit widths however.
- declare i8 @llvm.atomic.load.sub.i8.p0i32( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.sub.i16.p0i32( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.sub.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.sub.i64.p0i32( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.sub.i8.p0i32(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.sub.i16.p0i32(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.sub.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.sub.i64.p0i32(i64* <ptr>, i64 <delta>)
Overview:
@@ -7295,11 +7263,11 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 8, %ptr
-%result1 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 4 )
+%result1 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 4)
; yields {i32}:result1 = 8
-%result2 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 2 )
+%result2 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 2)
; yields {i32}:result2 = 4
-%result3 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 5 )
+%result3 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 5)
; yields {i32}:result3 = 2
%memval1 = load i32* %ptr
; yields {i32}:memval1 = -3
@@ -7324,31 +7292,31 @@ LLVM.
widths however.
- declare i8 @llvm.atomic.load.and.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.and.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.and.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.and.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.and.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.and.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.and.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.and.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.or.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.or.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.or.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.or.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.or.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.or.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.or.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.or.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.nand.i8.p0i32( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.nand.i16.p0i32( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.nand.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.nand.i64.p0i32( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.nand.i8.p0i32(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.nand.i16.p0i32(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.nand.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.nand.i64.p0i32(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.xor.i8.p0i32( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.xor.i16.p0i32( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.xor.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.xor.i64.p0i32( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.xor.i8.p0i32(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.xor.i16.p0i32(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.xor.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.xor.i64.p0i32(i64* <ptr>, i64 <delta>)
Overview:
@@ -7373,13 +7341,13 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 0x0F0F, %ptr
-%result0 = call i32 @llvm.atomic.load.nand.i32.p0i32( i32* %ptr, i32 0xFF )
+%result0 = call i32 @llvm.atomic.load.nand.i32.p0i32(i32* %ptr, i32 0xFF)
; yields {i32}:result0 = 0x0F0F
-%result1 = call i32 @llvm.atomic.load.and.i32.p0i32( i32* %ptr, i32 0xFF )
+%result1 = call i32 @llvm.atomic.load.and.i32.p0i32(i32* %ptr, i32 0xFF)
; yields {i32}:result1 = 0xFFFFFFF0
-%result2 = call i32 @llvm.atomic.load.or.i32.p0i32( i32* %ptr, i32 0F )
+%result2 = call i32 @llvm.atomic.load.or.i32.p0i32(i32* %ptr, i32 0x0F)
; yields {i32}:result2 = 0xF0
-%result3 = call i32 @llvm.atomic.load.xor.i32.p0i32( i32* %ptr, i32 0F )
+%result3 = call i32 @llvm.atomic.load.xor.i32.p0i32(i32* %ptr, i32 0x0F)
; yields {i32}:result3 = 0xFF
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 0xF0
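The yields comments in this example can be replayed in plain C to check the arithmetic, taking the '0F' literals as hexadecimal 0x0F; this sketch verifies only the bitwise updates, not the atomicity:

```c
#include <stdint.h>

/* Replay the nand/and/or/xor sequence from the example, recording the
   pre-operation value each time, as the intrinsics' return values do. */
uint32_t replay(uint32_t *mem, uint32_t results[4]) {
    results[0] = *mem; *mem = ~(*mem & 0xFFu);  /* load.nand i32 0xFF */
    results[1] = *mem; *mem =   *mem & 0xFFu;   /* load.and  i32 0xFF */
    results[2] = *mem; *mem =   *mem | 0x0Fu;   /* load.or   i32 0x0F */
    results[3] = *mem; *mem =   *mem ^ 0x0Fu;   /* load.xor  i32 0x0F */
    return *mem;                                /* final memory value */
}
```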
@@ -7403,31 +7371,31 @@ LLVM.
address spaces. Not all targets support all bit widths however.
- declare i8 @llvm.atomic.load.max.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.max.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.max.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.max.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.max.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.max.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.max.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.max.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.min.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.min.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.min.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.min.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.min.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.min.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.min.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.min.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.umax.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.umax.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.umax.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.umax.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.umax.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.umax.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.umax.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.umax.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.umin.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.umin.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.umin.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.umin.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.umin.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.umin.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.umin.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.umin.i64.p0i64(i64* <ptr>, i64 <delta>)
Overview:
@@ -7452,13 +7420,13 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 7, %ptr
-%result0 = call i32 @llvm.atomic.load.min.i32.p0i32( i32* %ptr, i32 -2 )
+%result0 = call i32 @llvm.atomic.load.min.i32.p0i32(i32* %ptr, i32 -2)
; yields {i32}:result0 = 7
-%result1 = call i32 @llvm.atomic.load.max.i32.p0i32( i32* %ptr, i32 8 )
+%result1 = call i32 @llvm.atomic.load.max.i32.p0i32(i32* %ptr, i32 8)
; yields {i32}:result1 = -2
-%result2 = call i32 @llvm.atomic.load.umin.i32.p0i32( i32* %ptr, i32 10 )
+%result2 = call i32 @llvm.atomic.load.umin.i32.p0i32(i32* %ptr, i32 10)
; yields {i32}:result2 = 8
-%result3 = call i32 @llvm.atomic.load.umax.i32.p0i32( i32* %ptr, i32 30 )
+%result3 = call i32 @llvm.atomic.load.umax.i32.p0i32(i32* %ptr, i32 30)
; yields {i32}:result3 = 8
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 30
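The signed/unsigned distinction in the min/max example above can be sketched the same way (a toy model in plain Python, not LLVM API; `signed` reinterprets a 32-bit pattern as a signed value):

```python
MASK = 0xFFFFFFFF

def signed(x):
    # reinterpret a 32-bit pattern as a signed integer
    return x - (1 << 32) if x & 0x80000000 else x

def rmw(mem, op):
    old = mem[0]
    mem[0] = op(old) & MASK
    return old

mem = [7]                                           # store i32 7
r0 = rmw(mem, lambda a: min(signed(a), -2) & MASK)  # min  -> r0 = 7
r1 = rmw(mem, lambda a: max(signed(a), 8) & MASK)   # max  -> r1 = -2
r2 = rmw(mem, lambda a: min(a, 10))                 # umin -> r2 = 8
r3 = rmw(mem, lambda a: max(a, 30))                 # umax -> r3 = 8
assert (r0, signed(r1), r2, r3, mem[0]) == (7, -2, 8, 8, 30)
```

Note that `min`/`max` compare as signed while `umin`/`umax` compare the raw bit patterns, which is why -2 wins the signed `min` but loses an unsigned comparison.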
@@ -7546,7 +7514,7 @@ LLVM.
Syntax:
- declare {}* @llvm.invariant.start(i64 <size>, i8* nocapture <ptr>) readonly
+ declare {}* @llvm.invariant.start(i64 <size>, i8* nocapture <ptr>)
Overview:
@@ -7613,7 +7581,7 @@ LLVM.
Syntax:
- declare void @llvm.var.annotation(i8* <val>, i8* <str>, i8* <str>, i32 <int> )
+ declare void @llvm.var.annotation(i8* <val>, i8* <str>, i8* <str>, i32 <int>)
Overview:
@@ -7644,11 +7612,11 @@ LLVM.
any integer bit width.
- declare i8 @llvm.annotation.i8(i8 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i16 @llvm.annotation.i16(i16 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i32 @llvm.annotation.i32(i32 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i64 @llvm.annotation.i64(i64 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i256 @llvm.annotation.i256(i256 <val>, i8* <str>, i8* <str>, i32 <int> )
+ declare i8 @llvm.annotation.i8(i8 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i16 @llvm.annotation.i16(i16 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i32 @llvm.annotation.i32(i32 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i64 @llvm.annotation.i64(i64 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i256 @llvm.annotation.i256(i256 <val>, i8* <str>, i8* <str>, i32 <int>)
Overview:
@@ -7702,7 +7670,7 @@ LLVM.
Syntax:
- declare void @llvm.stackprotector( i8* <guard>, i8** <slot> )
+ declare void @llvm.stackprotector(i8* <guard>, i8** <slot>)
Overview:
@@ -7721,7 +7689,7 @@ LLVM.
the
AllocaInst stack slot to be before local variables on the
stack. This is to ensure that if a local variable on the stack is
overwritten, it will destroy the value of the guard. When the function exits,
- the guard on the stack is checked against the original guard. If they're
+ the guard on the stack is checked against the original guard. If they are
different, then the program aborts by calling the
__stack_chk_fail()
function.
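The guard-and-check scheme described above can be modeled in a few lines of Python (a toy analogue, not LLVM API: the guard is stashed before the body runs, and verified on exit):

```python
import os

def call_with_guard(body):
    """Toy model of the stack-protector scheme: stash a guard value in a
    'stack slot', run the body, then verify the slot before returning."""
    guard = os.urandom(8)
    slot = bytearray(guard)     # llvm.stackprotector stores the guard here
    result = body(slot)
    if bytes(slot) != guard:    # epilogue check; __stack_chk_fail() analogue
        raise RuntimeError("stack smashing detected")
    return result

# Guard left intact: the call returns normally.
assert call_with_guard(lambda slot: 42) == 42

# Guard overwritten (as a buffer overrun would): the check fires.
smashed = False
try:
    call_with_guard(lambda slot: slot.__setitem__(0, (slot[0] + 1) % 256))
except RuntimeError:
    smashed = True
assert smashed
```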
@@ -7736,30 +7704,29 @@ LLVM.
Syntax:
- declare i32 @llvm.objectsize.i32( i8* <object>, i1 <type> )
- declare i64 @llvm.objectsize.i64( i8* <object>, i1 <type> )
+ declare i32 @llvm.objectsize.i32(i8* <object>, i1 <type>)
+ declare i64 @llvm.objectsize.i64(i8* <object>, i1 <type>)
Overview:
- The llvm.objectsize intrinsic is designed to provide information
- to the optimizers to discover at compile time either a) when an
- operation like memcpy will either overflow a buffer that corresponds to
- an object, or b) to determine that a runtime check for overflow isn't
- necessary. An object in this context means an allocation of a
- specific class, structure, array, or other object.
+ The llvm.objectsize intrinsic is designed to provide information to
+ the optimizers to determine at compile time whether a) an operation (like
+ memcpy) will overflow a buffer that corresponds to an object, or b) that a
+ runtime check for overflow isn't necessary. An object in this context means
+ an allocation of a specific class, structure, array, or other object.
Arguments:
- The llvm.objectsize intrinsic takes two arguments. The first
+ The llvm.objectsize intrinsic takes two arguments. The first
argument is a pointer to or into the object. The second argument
- is a boolean 0 or 1. This argument determines whether you want the
- maximum (0) or minimum (1) bytes remaining. This needs to be a literal 0 or
+ is a boolean 0 or 1. This argument determines whether you want the
+ maximum (0) or minimum (1) bytes remaining. This needs to be a literal 0 or
1; variables are not allowed.
Semantics:
The llvm.objectsize intrinsic is lowered to either a constant
- representing the size of the object concerned or i32/i64 -1 or 0
- (depending on the type argument if the size cannot be determined
- at compile time.
+ representing the size of the object concerned, or i32/i64 -1 or 0,
+ depending on the type argument, if the size cannot be determined at
+ compile time.
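The semantics above can be sketched as a small Python model (illustrative only, not LLVM API: `None` stands for an allocation whose size is not known at compile time):

```python
def objectsize(alloc_size, offset, want_min):
    """Toy model of llvm.objectsize: bytes remaining from a pointer at
    'offset' into an object of 'alloc_size' bytes, or the failure value
    when the size cannot be determined at compile time."""
    if alloc_size is None:              # size unknown at compile time
        return 0 if want_min else -1    # i32/i64 0 (min) or -1 (max)
    return alloc_size - offset

assert objectsize(32, 8, want_min=False) == 24  # 24 bytes remain
assert objectsize(None, 0, want_min=False) == -1  # "maximum" query fails as -1
assert objectsize(None, 0, want_min=True) == 0    # "minimum" query fails as 0
```

The two failure values mirror the boolean second argument: a caller asking for the maximum remaining bytes gets a pessimistic -1, while one asking for the minimum gets 0.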