There are no constants of type x86mmx.
@@ -2087,14 +2100,6 @@ Classifications
the number and types of elements must match those specified by the
type.
- Perform the specified operation of the LHS and RHS constants. OPCODE may
be any of the binary
or bitwise binary operations. The constraints
@@ -2470,31 +2578,25 @@ has undefined behavior.
containing the asm needs to align its stack conservatively. An example
inline assembler expression is:
-
-
+
i32 (i32) asm "bswap $0", "=r,r"
-
Inline assembler expressions may only be used as the callee operand of
a call instruction. Thus, typically we
have:
-
-
+
%X = call i32 asm "bswap $0", "=r,r"(i32 %Y)
-
Inline asms with side effects not visible in the constraint list must be
marked as having side effects. This is done through the use of the
'sideeffect' keyword, like so:
-
-
+
call void asm sideeffect "eieio", ""()
-
In some cases inline asms will contain code that will not work unless the
stack is aligned in some way, such as calls or SSE instructions on x86,
@@ -2503,11 +2605,9 @@ call void asm sideeffect "eieio", ""()
contain and should generate its usual stack alignment code in the prologue
if the 'alignstack' keyword is present:
-
-
+
call void asm alignstack "eieio", ""()
-
If both keywords appear the 'sideeffect' keyword must come
first.
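When both keywords are needed, a call would therefore look like this (a sketch following the forms above, not a line from the patch):

```llvm
call void asm sideeffect alignstack "eieio", ""()
```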
@@ -2528,16 +2628,14 @@ call void asm alignstack "eieio", ""()
attached to it that contains a constant integer. If present, the code
generator will use the integer as the location cookie value when reporting
errors through the LLVMContext error reporting mechanisms. This allows a
- front-end to corrolate backend errors that occur with inline asm back to the
+ front-end to correlate backend errors that occur with inline asm back to the
source code that produced it. For example:
-
-
+
call void asm sideeffect "something bad", ""(), !srcloc !42
...
!42 = !{ i32 1234567 }
-
It is up to the front-end to make sense of the magic numbers it places in the
IR.
@@ -2572,22 +2670,18 @@ call void asm sideeffect "something bad", ""(), !srcloc !42
example: "!foo = metadata !{!4, !3}".
Metadata can be used as function arguments. Here llvm.dbg.value
- function is using two metadata arguments.
+ function is using two metadata arguments.
-
-
+
call void @llvm.dbg.value(metadata !24, i64 0, metadata !25)
-
Metadata can be attached with an instruction. Here metadata !21 is
- attached with add instruction using !dbg identifier.
+ attached with add instruction using !dbg identifier.
-
-
+
%indvar.next = add i64 %indvar, 1, !dbg !21
-
@@ -2662,8 +2756,12 @@ should not be exposed to source languages.
-
-
TODO: Describe this.
+
+%0 = type { i32, void ()* }
+@llvm.global_ctors = appending global [1 x %0] [%0 { i32 65535, void ()* @ctor }]
+
+
+The @llvm.global_ctors array contains a list of constructor functions and associated priorities. The functions referenced by this array will be called in ascending order of priority (i.e. lowest first) when the module is loaded. The order of functions with the same priority is not defined.
+
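To illustrate the priority ordering described above (a hypothetical sketch, not part of the patch; @early and @late are assumed constructor functions):

```llvm
%0 = type { i32, void ()* }
; @early (priority 100) runs before @late (priority 200): ascending priority order.
@llvm.global_ctors = appending global [2 x %0] [%0 { i32 100, void ()* @early }, %0 { i32 200, void ()* @late }]
```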
@@ -2673,8 +2771,13 @@ should not be exposed to source languages.
-
TODO: Describe this.
+
+%0 = type { i32, void ()* }
+@llvm.global_dtors = appending global [1 x %0] [%0 { i32 65535, void ()* @dtor }]
+
+
+The @llvm.global_dtors array contains a list of destructor functions and associated priorities. The functions referenced by this array will be called in descending order of priority (i.e. highest first) when the module is unloaded. The order of functions with the same priority is not defined.
+
@@ -2707,7 +2810,7 @@ Instructions
control flow, not values (the one exception being the
'invoke' instruction).
-There are six different terminator instructions: the
+
+There are seven different terminator instructions: the
'ret' instruction, the
'br' instruction, the
'switch' instruction, the
@@ -3104,7 +3207,8 @@ Instruction
nuw and nsw stand for "No Unsigned Wrap"
and "No Signed Wrap", respectively. If the nuw and/or
nsw keywords are present, the result value of the add
- is undefined if unsigned and/or signed overflow, respectively, occurs.
+ is a trap value if unsigned and/or signed overflow,
+ respectively, occurs.
Example:
@@ -3184,7 +3288,8 @@ Instruction
nuw and nsw stand for "No Unsigned Wrap"
and "No Signed Wrap", respectively. If the nuw and/or
nsw keywords are present, the result value of the sub
- is undefined if unsigned and/or signed overflow, respectively, occurs.
+ is a trap value if unsigned and/or signed overflow,
+ respectively, occurs.
Example:
@@ -3270,7 +3375,8 @@ Instruction
nuw and nsw stand for "No Unsigned Wrap"
and "No Signed Wrap", respectively. If the nuw and/or
nsw keywords are present, the result value of the mul
- is undefined if unsigned and/or signed overflow, respectively, occurs.
+ is a trap value if unsigned and/or signed overflow,
+ respectively, occurs.
Example:
@@ -3375,8 +3481,8 @@ Instruction
a 32-bit division of -2147483648 by -1.
If the exact keyword is present, the result value of the
- sdiv is undefined if the result would be rounded or if overflow
- would occur.
+ sdiv is a trap value if the result would
+ be rounded.
Example:
@@ -4016,7 +4122,7 @@ Instruction
Arguments:
The first operand of an 'extractvalue' instruction is a value
- of struct, union or
+ of struct or
array type. The operands are constant indices to
specify which value to extract in a similar manner as indices in a
'getelementptr' instruction.
@@ -4050,7 +4156,7 @@ Instruction
Arguments:
The first operand of an 'insertvalue' instruction is a value
- of struct, union or
+ of struct or
array type. The second operand is a first-class
value to insert. The following operands are constant indices indicating
the position at which to insert the value in a similar manner as indices in a
@@ -4095,7 +4201,7 @@ Instruction
Syntax:
- <result> = alloca <type>[, i32 <NumElements>][, align <alignment>] ; yields {type*}:result
+ <result> = alloca <type>[, <ty> <NumElements>][, align <alignment>] ; yields {type*}:result
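With the generalized syntax, the element count is no longer restricted to i32; for instance (a sketch, where %n is a hypothetical i64 value):

```llvm
%buf = alloca i32, i64 %n, align 4    ; element count typed i64 rather than i32
```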
Overview:
@@ -4158,9 +4264,8 @@ Instruction
from which to load. The pointer must point to
a first class type. If the load is
marked as volatile, then the optimizer is not allowed to modify the
- number or order of execution of this load with other
- volatile load and store
- instructions.
+ number or order of execution of this load with other volatile operations.
The optional constant align argument specifies the alignment of the
operation (that is, the alignment of the memory address). A value of 0 or an
@@ -4204,8 +4309,8 @@ Instruction
Syntax:
- store <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !] ; yields {void}
- volatile store <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !] ; yields {void}
+ store <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>] ; yields {void}
+ volatile store <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>] ; yields {void}
Overview:
@@ -4216,11 +4321,10 @@ Instruction
and an address at which to store it. The type of the
'<pointer>' operand must be a pointer to
the first class type of the
- '<value>' operand. If the store is marked
- as volatile, then the optimizer is not allowed to modify the number
- or order of execution of this store with other
- volatile load and store
- instructions.
+ '<value>' operand. If the store is marked as
+ volatile, then the optimizer is not allowed to modify the number or
+ order of execution of this store with other volatile operations.
The optional constant "align" argument specifies the alignment of the
operation (that is, the alignment of the memory address). A value of 0 or an
@@ -4231,7 +4335,7 @@ Instruction
produce less efficient code. An alignment of 1 is always safe.
The optional !nontemporal metadata must reference a single metadata
- name corresponding to a metadata node with one i32 entry of
+ name <index> corresponding to a metadata node with one i32 entry of
value 1. The existence of the !nontemporal metadata on the
instruction tells the optimizer and code generator that this store is
not expected to be reused in the cache. The code generator may
@@ -4285,12 +4389,12 @@ Instruction
indexes a value of the type pointed to (not necessarily the value directly
pointed to, since the first index can be non-zero), etc. The first type
indexed into must be a pointer value, subsequent types can be arrays,
- vectors, structs and unions. Note that subsequent types being indexed into
+ vectors, and structs. Note that subsequent types being indexed into
can never be pointers, since that would require loading the pointer before
continuing calculation.
The type of each index argument depends on the type it is indexing into.
- When indexing into a (optionally packed) structure or union, only i32
+ When indexing into a (optionally packed) structure, only i32
integer constants are allowed. When indexing into an array, pointer
or vector, integers of any width are allowed, and they are not required to be
constant.
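As a sketch of these index-type rules (the %st and %k values are hypothetical):

```llvm
; The struct field index must be a constant i32; the array index may have any
; integer width and need not be constant.
%p = getelementptr { i32, [10 x double] }* %st, i64 0, i32 1, i64 %k
```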
@@ -4298,8 +4402,7 @@ Instruction
For example, let's consider a C code fragment and how it gets compiled to
LLVM:
-
-
+
struct RT {
char A;
int B[10][20];
@@ -4315,12 +4418,10 @@ int *foo(struct ST *s) {
return &s[1].Z.B[5][13];
}
-
The LLVM code generated by the GCC frontend is:
-
-
+
%RT = type { i8 , [10 x [20 x i32]], i8 }
%ST = type { i32, double, %RT }
@@ -4330,7 +4431,6 @@ entry:
ret i32* %reg
}
-
Semantics:
In the example above, the first index is indexing into the '%ST*'
@@ -4359,13 +4459,14 @@ entry:
If the inbounds keyword is present, the result value of the
- getelementptr is undefined if the base pointer is not an
- in bounds address of an allocated object, or if any of the addresses
- that would be formed by successive addition of the offsets implied by the
- indices to the base address with infinitely precise arithmetic are not an
- in bounds address of that allocated object.
- The in bounds addresses for an allocated object are all the addresses
- that point into the object, plus the address one byte past the end.
+ getelementptr is a trap value if the
+ base pointer is not an in bounds address of an allocated object,
+ or if any of the addresses that would be formed by successive addition of
+ the offsets implied by the indices to the base address with infinitely
+ precise arithmetic are not an in bounds address of that allocated
+ object. The in bounds addresses for an allocated object are all
+ the addresses that point into the object, plus the address one byte past
+ the end.
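A sketch of the boundary cases (hypothetical values, following the rule that one byte past the end is still in bounds):

```llvm
%a = alloca [4 x i32]
; One past the end of the object is still an in bounds address.
%ok = getelementptr inbounds [4 x i32]* %a, i32 0, i32 4
; Anything beyond that makes the result a trap value.
%bad = getelementptr inbounds [4 x i32]* %a, i32 0, i32 5
```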
If the inbounds keyword is not present, the offsets are added to
the base address with silently-wrapping two's complement arithmetic, and
@@ -5263,7 +5364,7 @@ Loop: ; Infinite loop that counts from 0 on up...
Example:
%retval = call i32 @test(i32 %argc)
- call i32 (i8 *, ...)* @printf(i8 * %msg, i32 12, i8 42) ; yields i32
+ call i32 (i8*, ...)* @printf(i8* %msg, i32 12, i8 42) ; yields i32
%X = tail call i32 @foo() ; yields i32
%Y = tail call fastcc i32 @foo() ; yields i32
call void %foo(i8 97 signext)
@@ -5400,8 +5501,7 @@ freestanding environments and non-C-based languages.
instruction and the variable argument handling intrinsic functions are
used.
-
-
+
define i32 @test(i32 %X, ...) {
; Initialize variable argument processing
%ap = alloca i8*
@@ -5426,7 +5526,6 @@ declare void @llvm.va_start(i8*)
declare void @llvm.va_copy(i8*, i8*)
declare void @llvm.va_end(i8*)
-
@@ -5696,7 +5795,7 @@ LLVM.
Syntax:
- declare i8 *@llvm.frameaddress(i32 <level>)
+ declare i8* @llvm.frameaddress(i32 <level>)
Overview:
@@ -5730,7 +5829,7 @@ LLVM.
Syntax:
- declare i8 *@llvm.stacksave()
+ declare i8* @llvm.stacksave()
Overview:
@@ -5760,7 +5859,7 @@ LLVM.
Syntax:
- declare void @llvm.stackrestore(i8 * %ptr)
+ declare void @llvm.stackrestore(i8* %ptr)
Overview:
@@ -5849,7 +5948,7 @@ LLVM.
Syntax:
- declare i64 @llvm.readcyclecounter( )
+ declare i64 @llvm.readcyclecounter()
Overview:
@@ -5890,17 +5989,14 @@ LLVM.
Syntax:
This is an overloaded intrinsic. You can use llvm.memcpy on any
- integer bit width. Not all targets support all bit widths however.
+ integer bit width and for different address spaces. Not all targets support
+ all bit widths however.
- declare void @llvm.memcpy.i8(i8 * <dest>, i8 * <src>,
- i8 <len>, i32 <align>)
- declare void @llvm.memcpy.i16(i8 * <dest>, i8 * <src>,
- i16 <len>, i32 <align>)
- declare void @llvm.memcpy.i32(i8 * <dest>, i8 * <src>,
- i32 <len>, i32 <align>)
- declare void @llvm.memcpy.i64(i8 * <dest>, i8 * <src>,
- i64 <len>, i32 <align>)
+ declare void @llvm.memcpy.p0i8.p0i8.i32(i8* <dest>, i8* <src>,
+ i32 <len>, i32 <align>, i1 <isvolatile>)
+ declare void @llvm.memcpy.p0i8.p0i8.i64(i8* <dest>, i8* <src>,
+ i64 <len>, i32 <align>, i1 <isvolatile>)
Overview:
@@ -5908,19 +6004,28 @@ LLVM.
source location to the destination location.
Note that, unlike the standard libc function, the llvm.memcpy.*
- intrinsics do not return a value, and takes an extra alignment argument.
+ intrinsics do not return a value, take extra alignment/isvolatile arguments,
+ and the pointers can be in specified address spaces.
Arguments:
+
The first argument is a pointer to the destination, the second is a pointer
to the source. The third argument is an integer argument specifying the
- number of bytes to copy, and the fourth argument is the alignment of the
- source and destination locations.
+ number of bytes to copy, the fourth argument is the alignment of the
+ source and destination locations, and the fifth is a boolean indicating a
+ volatile access.
If the call to this intrinsic has an alignment value that is not 0 or 1,
then the caller guarantees that both the source and destination pointers are
aligned to that boundary.
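With the new signature, a non-volatile copy of 16 bytes at 4-byte alignment would be written as follows (a sketch; %dst and %src are hypothetical i8* values):

```llvm
call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src, i32 16, i32 4, i1 false)
```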
+If the isvolatile parameter is true, the
+ llvm.memcpy call is a volatile operation.
+ The detailed access behavior is not very cleanly specified and it is unwise
+ to depend on it.
+
Semantics:
+
The 'llvm.memcpy.*' intrinsics copy a block of memory from the
source location to the destination location, which are not allowed to
overlap. It copies "len" bytes of memory over. If the argument is known to
@@ -5938,17 +6043,14 @@ LLVM.
Syntax:
This is an overloaded intrinsic. You can use llvm.memmove on any integer bit
- width. Not all targets support all bit widths however.
+ width and for different address spaces. Not all targets support all bit
+ widths however.
- declare void @llvm.memmove.i8(i8 * <dest>, i8 * <src>,
- i8 <len>, i32 <align>)
- declare void @llvm.memmove.i16(i8 * <dest>, i8 * <src>,
- i16 <len>, i32 <align>)
- declare void @llvm.memmove.i32(i8 * <dest>, i8 * <src>,
- i32 <len>, i32 <align>)
- declare void @llvm.memmove.i64(i8 * <dest>, i8 * <src>,
- i64 <len>, i32 <align>)
+ declare void @llvm.memmove.p0i8.p0i8.i32(i8* <dest>, i8* <src>,
+ i32 <len>, i32 <align>, i1 <isvolatile>)
+ declare void @llvm.memmove.p0i8.p0i8.i64(i8* <dest>, i8* <src>,
+ i64 <len>, i32 <align>, i1 <isvolatile>)
Overview:
@@ -5958,19 +6060,28 @@ LLVM.
overlap.
Note that, unlike the standard libc function, the llvm.memmove.*
- intrinsics do not return a value, and takes an extra alignment argument.
+ intrinsics do not return a value, take extra alignment/isvolatile arguments,
+ and the pointers can be in specified address spaces.
Arguments:
+
The first argument is a pointer to the destination, the second is a pointer
to the source. The third argument is an integer argument specifying the
- number of bytes to copy, and the fourth argument is the alignment of the
- source and destination locations.
+ number of bytes to copy, the fourth argument is the alignment of the
+ source and destination locations, and the fifth is a boolean indicating a
+ volatile access.
If the call to this intrinsic has an alignment value that is not 0 or 1,
then the caller guarantees that the source and destination pointers are
aligned to that boundary.
+If the isvolatile parameter is true, the
+ llvm.memmove call is a volatile operation.
+ The detailed access behavior is not very cleanly specified and it is unwise
+ to depend on it.
+
Semantics:
+
The 'llvm.memmove.*' intrinsics copy a block of memory from the
source location to the destination location, which may overlap. It copies
"len" bytes of memory over. If the argument is known to be aligned to some
@@ -5988,17 +6099,14 @@ LLVM.
Syntax:
This is an overloaded intrinsic. You can use llvm.memset on any integer bit
- width. Not all targets support all bit widths however.
+ width and for different address spaces. However, not all targets support all
+ bit widths.
- declare void @llvm.memset.i8(i8 * <dest>, i8 <val>,
- i8 <len>, i32 <align>)
- declare void @llvm.memset.i16(i8 * <dest>, i8 <val>,
- i16 <len>, i32 <align>)
- declare void @llvm.memset.i32(i8 * <dest>, i8 <val>,
- i32 <len>, i32 <align>)
- declare void @llvm.memset.i64(i8 * <dest>, i8 <val>,
- i64 <len>, i32 <align>)
+ declare void @llvm.memset.p0i8.i32(i8* <dest>, i8 <val>,
+ i32 <len>, i32 <align>, i1 <isvolatile>)
+ declare void @llvm.memset.p0i8.i64(i8* <dest>, i8 <val>,
+ i64 <len>, i32 <align>, i1 <isvolatile>)
Overview:
@@ -6006,18 +6114,24 @@ LLVM.
particular byte value.
Note that, unlike the standard libc function, the llvm.memset
- intrinsic does not return a value, and takes an extra alignment argument.
+ intrinsic does not return a value and takes extra alignment/volatile
+ arguments. Also, the destination can be in an arbitrary address space.
Arguments:
The first argument is a pointer to the destination to fill, the second is the
- byte value to fill it with, the third argument is an integer argument
+ byte value with which to fill it, the third argument is an integer argument
specifying the number of bytes to fill, and the fourth argument is the known
- alignment of destination location.
+ alignment of the destination location.
If the call to this intrinsic has an alignment value that is not 0 or 1,
then the caller guarantees that the destination pointer is aligned to that
boundary.
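Under the new signature, zero-filling 64 bytes at 16-byte alignment, non-volatile, would look like this (a sketch; %dst is a hypothetical i8* value):

```llvm
call void @llvm.memset.p0i8.i32(i8* %dst, i8 0, i32 64, i32 16, i1 false)
```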
+If the isvolatile parameter is true, the
+ llvm.memset call is a volatile operation.
+ The detailed access behavior is not very cleanly specified and it is unwise
+ to depend on it.
+
Semantics:
The 'llvm.memset.*' intrinsics fill "len" bytes of memory starting
at the destination location. If the argument is known to be aligned to some
@@ -6764,7 +6878,8 @@ LLVM.
This intrinsic makes it possible to excise one parameter, marked with
- the nest attribute, from a function. The result is a callable
+ the nest attribute, from a function.
+ The result is a callable
function pointer lacking the nest parameter - the caller does not need to
provide a value for it. Instead, the value to use is stored in advance in a
"trampoline", a block of memory usually allocated on the stack, which also
@@ -6776,17 +6891,15 @@ LLVM.
pointer has signature
i32 (i32, i32)*. It can be created as
follows:
-
-
+
%tramp = alloca [10 x i8], align 4 ; size and alignment only correct for X86
%tramp1 = getelementptr [10 x i8]* %tramp, i32 0, i32 0
- %p = call i8* @llvm.init.trampoline( i8* %tramp1, i8* bitcast (i32 (i8* nest , i32, i32)* @f to i8*), i8* %nval )
+ %p = call i8* @llvm.init.trampoline(i8* %tramp1, i8* bitcast (i32 (i8* nest , i32, i32)* @f to i8*), i8* %nval)
%fp = bitcast i8* %p to i32 (i32, i32)*
-
-
- The call %val = call i32 %fp( i32 %x, i32 %y ) is then equivalent
- to %val = call i32 %f( i8* %nval, i32 %x, i32 %y ).
+
+ The call %val = call i32 %fp(i32 %x, i32 %y) is then equivalent
+ to %val = call i32 %f(i8* %nval, i32 %x, i32 %y).
@@ -6866,7 +6979,7 @@ LLVM.
Syntax:
- declare void @llvm.memory.barrier( i1 <ll>, i1 <ls>, i1 <sl>, i1 <ss>, i1 <device> )
+ declare void @llvm.memory.barrier(i1 <ll>, i1 <ls>, i1 <sl>, i1 <ss>, i1 <device>)
Overview:
@@ -6923,7 +7036,7 @@ LLVM.
store i32 4, %ptr
%result1 = load i32* %ptr
; yields {i32}:result1 = 4
- call void @llvm.memory.barrier( i1 false, i1 true, i1 false, i1 false )
+ call void @llvm.memory.barrier(i1 false, i1 true, i1 false, i1 false)
; guarantee the above finishes
store i32 8, %ptr
; before this begins
@@ -6943,10 +7056,10 @@ LLVM.
support all bit widths however.
- declare i8 @llvm.atomic.cmp.swap.i8.p0i8( i8* <ptr>, i8 <cmp>, i8 <val> )
- declare i16 @llvm.atomic.cmp.swap.i16.p0i16( i16* <ptr>, i16 <cmp>, i16 <val> )
- declare i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* <ptr>, i32 <cmp>, i32 <val> )
- declare i64 @llvm.atomic.cmp.swap.i64.p0i64( i64* <ptr>, i64 <cmp>, i64 <val> )
+ declare i8 @llvm.atomic.cmp.swap.i8.p0i8(i8* <ptr>, i8 <cmp>, i8 <val>)
+ declare i16 @llvm.atomic.cmp.swap.i16.p0i16(i16* <ptr>, i16 <cmp>, i16 <val>)
+ declare i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* <ptr>, i32 <cmp>, i32 <val>)
+ declare i64 @llvm.atomic.cmp.swap.i64.p0i64(i64* <ptr>, i64 <cmp>, i64 <val>)
Overview:
@@ -6975,13 +7088,13 @@ LLVM.
store i32 4, %ptr
%val1 = add i32 4, 4
-%result1 = call i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* %ptr, i32 4, %val1 )
+%result1 = call i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* %ptr, i32 4, %val1)
; yields {i32}:result1 = 4
%stored1 = icmp eq i32 %result1, 4
; yields {i1}:stored1 = true
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 8
%val2 = add i32 1, 1
-%result2 = call i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* %ptr, i32 5, %val2 )
+%result2 = call i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* %ptr, i32 5, %val2)
; yields {i32}:result2 = 8
%stored2 = icmp eq i32 %result2, 5
; yields {i1}:stored2 = false
@@ -7001,10 +7114,10 @@ LLVM.
integer bit width. Not all targets support all bit widths however.
- declare i8 @llvm.atomic.swap.i8.p0i8( i8* <ptr>, i8 <val> )
- declare i16 @llvm.atomic.swap.i16.p0i16( i16* <ptr>, i16 <val> )
- declare i32 @llvm.atomic.swap.i32.p0i32( i32* <ptr>, i32 <val> )
- declare i64 @llvm.atomic.swap.i64.p0i64( i64* <ptr>, i64 <val> )
+ declare i8 @llvm.atomic.swap.i8.p0i8(i8* <ptr>, i8 <val>)
+ declare i16 @llvm.atomic.swap.i16.p0i16(i16* <ptr>, i16 <val>)
+ declare i32 @llvm.atomic.swap.i32.p0i32(i32* <ptr>, i32 <val>)
+ declare i64 @llvm.atomic.swap.i64.p0i64(i64* <ptr>, i64 <val>)
Overview:
@@ -7031,13 +7144,13 @@ LLVM.
store i32 4, %ptr
%val1 = add i32 4, 4
-%result1 = call i32 @llvm.atomic.swap.i32.p0i32( i32* %ptr, i32 %val1 )
+%result1 = call i32 @llvm.atomic.swap.i32.p0i32(i32* %ptr, i32 %val1)
; yields {i32}:result1 = 4
%stored1 = icmp eq i32 %result1, 4
; yields {i1}:stored1 = true
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 8
%val2 = add i32 1, 1
-%result2 = call i32 @llvm.atomic.swap.i32.p0i32( i32* %ptr, i32 %val2 )
+%result2 = call i32 @llvm.atomic.swap.i32.p0i32(i32* %ptr, i32 %val2)
; yields {i32}:result2 = 8
%stored2 = icmp eq i32 %result2, 8
; yields {i1}:stored2 = true
@@ -7059,10 +7172,10 @@ LLVM.
any integer bit width. Not all targets support all bit widths however.
- declare i8 @llvm.atomic.load.add.i8..p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.add.i16..p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.add.i32..p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.add.i64..p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.add.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.add.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.add.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.add.i64.p0i64(i64* <ptr>, i64 <delta>)
Overview:
@@ -7085,11 +7198,11 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 4, %ptr
-%result1 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 4 )
+%result1 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 4)
; yields {i32}:result1 = 4
-%result2 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 2 )
+%result2 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 2)
; yields {i32}:result2 = 8
-%result3 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 5 )
+%result3 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 5)
; yields {i32}:result3 = 10
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 15
@@ -7110,10 +7223,10 @@ LLVM.
support all bit widths however.
- declare i8 @llvm.atomic.load.sub.i8.p0i32( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.sub.i16.p0i32( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.sub.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.sub.i64.p0i32( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.sub.i8.p0i32(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.sub.i16.p0i32(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.sub.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.sub.i64.p0i32(i64* <ptr>, i64 <delta>)
Overview:
@@ -7137,11 +7250,11 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 8, %ptr
-%result1 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 4 )
+%result1 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 4)
; yields {i32}:result1 = 8
-%result2 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 2 )
+%result2 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 2)
; yields {i32}:result2 = 4
-%result3 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 5 )
+%result3 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 5)
; yields {i32}:result3 = 2
%memval1 = load i32* %ptr
; yields {i32}:memval1 = -3
@@ -7166,31 +7279,31 @@ LLVM.
widths however.
- declare i8 @llvm.atomic.load.and.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.and.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.and.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.and.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.and.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.and.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.and.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.and.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.or.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.or.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.or.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.or.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.or.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.or.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.or.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.or.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.nand.i8.p0i32( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.nand.i16.p0i32( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.nand.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.nand.i64.p0i32( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.nand.i8.p0i32(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.nand.i16.p0i32(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.nand.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.nand.i64.p0i32(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.xor.i8.p0i32( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.xor.i16.p0i32( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.xor.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.xor.i64.p0i32( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.xor.i8.p0i32(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.xor.i16.p0i32(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.xor.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.xor.i64.p0i32(i64* <ptr>, i64 <delta>)
Overview:
@@ -7215,13 +7328,13 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 0x0F0F, %ptr
-%result0 = call i32 @llvm.atomic.load.nand.i32.p0i32( i32* %ptr, i32 0xFF )
+%result0 = call i32 @llvm.atomic.load.nand.i32.p0i32(i32* %ptr, i32 0xFF)
; yields {i32}:result0 = 0x0F0F
-%result1 = call i32 @llvm.atomic.load.and.i32.p0i32( i32* %ptr, i32 0xFF )
+%result1 = call i32 @llvm.atomic.load.and.i32.p0i32(i32* %ptr, i32 0xFF)
; yields {i32}:result1 = 0xFFFFFFF0
-%result2 = call i32 @llvm.atomic.load.or.i32.p0i32( i32* %ptr, i32 0F )
+%result2 = call i32 @llvm.atomic.load.or.i32.p0i32(i32* %ptr, i32 0xF)
; yields {i32}:result2 = 0xF0
-%result3 = call i32 @llvm.atomic.load.xor.i32.p0i32( i32* %ptr, i32 0F )
+%result3 = call i32 @llvm.atomic.load.xor.i32.p0i32(i32* %ptr, i32 0xF)
; yields {i32}:result3 = 0xFF
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 0xF0
@@ -7245,31 +7358,31 @@ LLVM.
address spaces. Not all targets support all bit widths however.
- declare i8 @llvm.atomic.load.max.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.max.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.max.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.max.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.max.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.max.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.max.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.max.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.min.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.min.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.min.i32..p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.min.i64..p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.min.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.min.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.min.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.min.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.umax.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.umax.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.umax.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.umax.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.umax.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.umax.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.umax.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.umax.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.umin.i8..p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.umin.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.umin.i32..p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.umin.i64..p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.umin.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.umin.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.umin.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.umin.i64.p0i64(i64* <ptr>, i64 <delta>)
Overview:
@@ -7294,13 +7407,13 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 7, %ptr
-%result0 = call i32 @llvm.atomic.load.min.i32.p0i32( i32* %ptr, i32 -2 )
+%result0 = call i32 @llvm.atomic.load.min.i32.p0i32(i32* %ptr, i32 -2)
; yields {i32}:result0 = 7
-%result1 = call i32 @llvm.atomic.load.max.i32.p0i32( i32* %ptr, i32 8 )
+%result1 = call i32 @llvm.atomic.load.max.i32.p0i32(i32* %ptr, i32 8)
; yields {i32}:result1 = -2
-%result2 = call i32 @llvm.atomic.load.umin.i32.p0i32( i32* %ptr, i32 10 )
+%result2 = call i32 @llvm.atomic.load.umin.i32.p0i32(i32* %ptr, i32 10)
; yields {i32}:result2 = 8
-%result3 = call i32 @llvm.atomic.load.umax.i32.p0i32( i32* %ptr, i32 30 )
+%result3 = call i32 @llvm.atomic.load.umax.i32.p0i32(i32* %ptr, i32 30)
; yields {i32}:result3 = 8
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 30
@@ -7455,7 +7568,7 @@ LLVM.
Syntax:
- declare void @llvm.var.annotation(i8* <val>, i8* <str>, i8* <str>, i32 <int> )
+ declare void @llvm.var.annotation(i8* <val>, i8* <str>, i8* <str>, i32 <int>)
Overview:
@@ -7486,11 +7599,11 @@ LLVM.
any integer bit width.
- declare i8 @llvm.annotation.i8(i8 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i16 @llvm.annotation.i16(i16 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i32 @llvm.annotation.i32(i32 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i64 @llvm.annotation.i64(i64 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i256 @llvm.annotation.i256(i256 <val>, i8* <str>, i8* <str>, i32 <int> )
+ declare i8 @llvm.annotation.i8(i8 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i16 @llvm.annotation.i16(i16 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i32 @llvm.annotation.i32(i32 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i64 @llvm.annotation.i64(i64 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i256 @llvm.annotation.i256(i256 <val>, i8* <str>, i8* <str>, i32 <int>)
Overview:
@@ -7544,7 +7657,7 @@ LLVM.
Syntax:
- declare void @llvm.stackprotector( i8* <guard>, i8** <slot> )
+ declare void @llvm.stackprotector(i8* <guard>, i8** <slot>)
Overview:
@@ -7563,7 +7676,7 @@ LLVM.
the
AllocaInst stack slot to be before local variables on the
stack. This is to ensure that if a local variable on the stack is
overwritten, it will destroy the value of the guard. When the function exits,
- the guard on the stack is checked against the original guard. If they're
+ the guard on the stack is checked against the original guard. If they are
different, then the program aborts by calling the
__stack_chk_fail()
function.
@@ -7578,30 +7691,29 @@ LLVM.
Syntax:
- declare i32 @llvm.objectsize.i32( i8* <object>, i1 <type> )
- declare i64 @llvm.objectsize.i64( i8* <object>, i1 <type> )
+ declare i32 @llvm.objectsize.i32(i8* <object>, i1 <type>)
+ declare i64 @llvm.objectsize.i64(i8* <object>, i1 <type>)
Overview:
-
- The llvm.objectsize intrinsic is designed to provide information
- to the optimizers to discover at compile time either a) when an
- operation like memcpy will either overflow a buffer that corresponds to
- an object, or b) to determine that a runtime check for overflow isn't
- necessary. An object in this context means an allocation of a
- specific class, structure, array, or other object.
+
+ The llvm.objectsize intrinsic is designed to provide information to
+ the optimizers to determine at compile time whether a) an operation (like
+ memcpy) will overflow a buffer that corresponds to an object, or b) that a
+ runtime check for overflow isn't necessary. An object in this context means
+ an allocation of a specific class, structure, array, or other object.
Arguments:
-
- The llvm.objectsize intrinsic takes two arguments. The first
+
+ The llvm.objectsize intrinsic takes two arguments.  The first
argument is a pointer to or into the object. The second argument
- is a boolean 0 or 1. This argument determines whether you want the
- maximum (0) or minimum (1) bytes remaining. This needs to be a literal 0 or
+ is a boolean 0 or 1. This argument determines whether you want the
+ maximum (0) or minimum (1) bytes remaining. This needs to be a literal 0 or
1; variables are not allowed.
Semantics:
The llvm.objectsize intrinsic is lowered to either a constant
- representing the size of the object concerned or i32/i64 -1 or 0
- (depending on the type argument if the size cannot be determined
- at compile time.
+ representing the size of the object concerned, or
+ i32/i64 -1 or 0, depending on the
+ type argument, if the size cannot be determined at
+ compile time.