Inline asms with side effects not visible in the constraint list must be
marked as having side effects. This is done through the use of the
'sideeffect' keyword, like so:
-In some cases inline asms will contain code that will not work unless the
stack is aligned in some way, such as calls or SSE instructions on x86,
@@ -2388,11 +2609,9 @@ call void asm sideeffect "eieio", ""()
contain and should generate its usual stack alignment code in the prologue
if the 'alignstack' keyword is present:
-Metadata can be used as function arguments. Here llvm.dbg.value
+ function uses two metadata arguments.
+
+ Metadata can be attached to an instruction. Here metadata !21 is
+ attached to the add instruction using the !dbg identifier.
+
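As a sketch of the two forms described above (the metadata node numbers and value names are illustrative, not taken from the surrounding text):

```llvm
; metadata passed as function arguments
call void @llvm.dbg.value(metadata !{i32 %sum}, i64 0, metadata !14)

; metadata attached to an instruction via the !dbg identifier
%sum = add i32 %a, %b, !dbg !21
```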
+ The result is the value at the position in the aggregate specified by the
@@ -3886,17 +4164,16 @@ Instruction
The location of memory pointed to is loaded. If the value being loaded is of
scalar type then the number of bytes read does not exceed the minimum number
@@ -4037,8 +4322,8 @@ Instruction
The optional constant "align" argument specifies the alignment of the
operation (that is, the alignment of the memory address). A value of 0 or an
@@ -4063,6 +4347,15 @@ Instruction
alignment results in an undefined behavior. Underestimating the alignment may
produce less efficient code. An alignment of 1 is always safe.
+The optional !nontemporal metadata must reference a single metadata
+ name <index> corresponding to a metadata node with one i32 entry of
+ value 1. The existence of the !nontemporal metadata on the
+ instruction tells the optimizer and code generator that this load is
+ not expected to be reused in the cache. The code generator may
+ select special instructions to save cache bandwidth, such as the
+ MOVNT instruction on x86.
+
+
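For illustration, a load carrying this metadata might look like the following sketch (the value names and metadata index are assumptions):

```llvm
; load that the optimizer should treat as non-temporal
%val = load i32* %ptr, !nontemporal !0
; ...
!0 = metadata !{ i32 1 }
```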
The first argument is always a pointer, and forms the basis of the
@@ -4108,22 +4401,21 @@ Instruction
indexes the pointer value given as the first argument, the second index
indexes a value of the type pointed to (not necessarily the value directly
pointed to, since the first index can be non-zero), etc. The first type
- indexed into must be a pointer value, subsequent types can be arrays, vectors
- and structs. Note that subsequent types being indexed into can never be
- pointers, since that would require loading the pointer before continuing
- calculation.
+ indexed into must be a pointer value, subsequent types can be arrays,
+ vectors, and structs. Note that subsequent types being indexed into
+ can never be pointers, since that would require loading the pointer before
+ continuing calculation.
The type of each index argument depends on the type it is indexing into.
- When indexing into a (optionally packed) structure, only i32 integer
- constants are allowed. When indexing into an array, pointer or
- vector, integers of any width are allowed, and they are not required to be
+ When indexing into an (optionally packed) structure, only i32
+ integer constants are allowed. When indexing into an array, pointer
+ or vector, integers of any width are allowed, and they are not required to be
constant.
For example, let's consider a C code fragment and how it gets compiled to
LLVM:
-The optional function attributes list. Only
'noreturn', 'nounwind', 'readonly' and
@@ -5084,7 +5377,7 @@ Loop: ; Infinite loop that counts from 0 on up...
Example:
%retval = call i32 @test(i32 %argc)
- call i32 (i8 *, ...)* @printf(i8 * %msg, i32 12, i8 42) ; yields i32
+ call i32 (i8*, ...)* @printf(i8* %msg, i32 12, i8 42) ; yields i32
%X = tail call i32 @foo() ; yields i32
%Y = tail call fastcc i32 @foo() ; yields i32
call void %foo(i8 97 signext)
@@ -5101,7 +5394,7 @@ Loop: ; Infinite loop that counts from 0 on up...
standard C99 library as being the C99 library functions, and may perform
optimizations or generate code for them under that assumption. This is
something we'd like to change in the future to provide better support for
-freestanding environments and non-C-based langauges.
+freestanding environments and non-C-based languages.
@@ -5221,8 +5514,7 @@ freestanding environments and non-C-based langauges.
instruction and the variable argument handling intrinsic functions are
used.
-
-
+
define i32 @test(i32 %X, ...) {
; Initialize variable argument processing
%ap = alloca i8*
@@ -5247,7 +5539,6 @@ declare void @llvm.va_start(i8*)
declare void @llvm.va_copy(i8*, i8*)
declare void @llvm.va_end(i8*)
-
@@ -5517,7 +5808,7 @@ LLVM.
Syntax:
- declare i8 *@llvm.frameaddress(i32 <level>)
+ declare i8* @llvm.frameaddress(i32 <level>)
Overview:
@@ -5551,7 +5842,7 @@ LLVM.
Syntax:
- declare i8 *@llvm.stacksave()
+ declare i8* @llvm.stacksave()
Overview:
@@ -5581,7 +5872,7 @@ LLVM.
Syntax:
- declare void @llvm.stackrestore(i8 * %ptr)
+ declare void @llvm.stackrestore(i8* %ptr)
Overview:
@@ -5657,7 +5948,7 @@ LLVM.
Semantics:
This intrinsic does not modify the behavior of the program. Backends that do
- not support this intrinisic may ignore it.
+ not support this intrinsic may ignore it.
@@ -5670,7 +5961,7 @@ LLVM.
Syntax:
- declare i64 @llvm.readcyclecounter( )
+ declare i64 @llvm.readcyclecounter()
Overview:
@@ -5711,17 +6002,14 @@ LLVM.
Syntax:
This is an overloaded intrinsic. You can use llvm.memcpy on any
- integer bit width. Not all targets support all bit widths however.
+ integer bit width and for different address spaces. Not all targets support
+ all bit widths however.
- declare void @llvm.memcpy.i8(i8 * <dest>, i8 * <src>,
- i8 <len>, i32 <align>)
- declare void @llvm.memcpy.i16(i8 * <dest>, i8 * <src>,
- i16 <len>, i32 <align>)
- declare void @llvm.memcpy.i32(i8 * <dest>, i8 * <src>,
- i32 <len>, i32 <align>)
- declare void @llvm.memcpy.i64(i8 * <dest>, i8 * <src>,
- i64 <len>, i32 <align>)
+ declare void @llvm.memcpy.p0i8.p0i8.i32(i8* <dest>, i8* <src>,
+ i32 <len>, i32 <align>, i1 <isvolatile>)
+ declare void @llvm.memcpy.p0i8.p0i8.i64(i8* <dest>, i8* <src>,
+ i64 <len>, i32 <align>, i1 <isvolatile>)
Overview:
@@ -5729,19 +6017,28 @@ LLVM.
source location to the destination location.
Note that, unlike the standard libc function, the llvm.memcpy.*
- intrinsics do not return a value, and takes an extra alignment argument.
+ intrinsics do not return a value, take extra alignment/isvolatile
+ arguments, and the pointers can be in specified address spaces.
Arguments:
+
The first argument is a pointer to the destination, the second is a pointer
to the source. The third argument is an integer argument specifying the
- number of bytes to copy, and the fourth argument is the alignment of the
- source and destination locations.
+ number of bytes to copy, the fourth argument is the alignment of the
+ source and destination locations, and the fifth is a boolean indicating a
+ volatile access.
-If the call to this intrinisic has an alignment value that is not 0 or 1,
+
If the call to this intrinsic has an alignment value that is not 0 or 1,
then the caller guarantees that both the source and destination pointers are
aligned to that boundary.
+If the isvolatile parameter is true, the
+ llvm.memcpy call is a volatile operation.
+ The detailed access behavior is not very cleanly specified and it is unwise
+ to depend on it.
+
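As a sketch, a call with the signature above might look like this (the pointer names, length, and alignment are illustrative):

```llvm
; copy 16 bytes with 4-byte alignment, non-volatile
call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src,
                                     i32 16, i32 4, i1 false)
```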
Semantics:
+
The 'llvm.memcpy.*' intrinsics copy a block of memory from the
source location to the destination location, which are not allowed to
overlap. It copies "len" bytes of memory over. If the argument is known to
@@ -5759,17 +6056,14 @@ LLVM.
Syntax:
This is an overloaded intrinsic. You can use llvm.memmove on any integer bit
- width. Not all targets support all bit widths however.
+ width and for different address spaces. Not all targets support all bit
+ widths however.
- declare void @llvm.memmove.i8(i8 * <dest>, i8 * <src>,
- i8 <len>, i32 <align>)
- declare void @llvm.memmove.i16(i8 * <dest>, i8 * <src>,
- i16 <len>, i32 <align>)
- declare void @llvm.memmove.i32(i8 * <dest>, i8 * <src>,
- i32 <len>, i32 <align>)
- declare void @llvm.memmove.i64(i8 * <dest>, i8 * <src>,
- i64 <len>, i32 <align>)
+ declare void @llvm.memmove.p0i8.p0i8.i32(i8* <dest>, i8* <src>,
+ i32 <len>, i32 <align>, i1 <isvolatile>)
+ declare void @llvm.memmove.p0i8.p0i8.i64(i8* <dest>, i8* <src>,
+ i64 <len>, i32 <align>, i1 <isvolatile>)
Overview:
@@ -5779,19 +6073,28 @@ LLVM.
overlap.
Note that, unlike the standard libc function, the llvm.memmove.*
- intrinsics do not return a value, and takes an extra alignment argument.
+ intrinsics do not return a value, take extra alignment/isvolatile
+ arguments, and the pointers can be in specified address spaces.
Arguments:
+
The first argument is a pointer to the destination, the second is a pointer
to the source. The third argument is an integer argument specifying the
- number of bytes to copy, and the fourth argument is the alignment of the
- source and destination locations.
+ number of bytes to copy, the fourth argument is the alignment of the
+ source and destination locations, and the fifth is a boolean indicating a
+ volatile access.
-If the call to this intrinisic has an alignment value that is not 0 or 1,
+
If the call to this intrinsic has an alignment value that is not 0 or 1,
then the caller guarantees that the source and destination pointers are
aligned to that boundary.
+If the isvolatile parameter is true, the
+ llvm.memmove call is a volatile operation.
+ The detailed access behavior is not very cleanly specified and it is unwise
+ to depend on it.
+
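As a sketch, a call using this signature might look like the following (the pointer names, length, and alignment are illustrative; unlike memcpy, the regions may overlap):

```llvm
; move 16 bytes with 4-byte alignment, non-volatile
call void @llvm.memmove.p0i8.p0i8.i32(i8* %dst, i8* %src,
                                      i32 16, i32 4, i1 false)
```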
Semantics:
+
The 'llvm.memmove.*' intrinsics copy a block of memory from the
source location to the destination location, which may overlap. It copies
"len" bytes of memory over. If the argument is known to be aligned to some
@@ -5809,17 +6112,14 @@ LLVM.
Syntax:
This is an overloaded intrinsic. You can use llvm.memset on any integer bit
- width. Not all targets support all bit widths however.
+ width and for different address spaces. However, not all targets support all
+ bit widths.
- declare void @llvm.memset.i8(i8 * <dest>, i8 <val>,
- i8 <len>, i32 <align>)
- declare void @llvm.memset.i16(i8 * <dest>, i8 <val>,
- i16 <len>, i32 <align>)
- declare void @llvm.memset.i32(i8 * <dest>, i8 <val>,
- i32 <len>, i32 <align>)
- declare void @llvm.memset.i64(i8 * <dest>, i8 <val>,
- i64 <len>, i32 <align>)
+ declare void @llvm.memset.p0i8.i32(i8* <dest>, i8 <val>,
+ i32 <len>, i32 <align>, i1 <isvolatile>)
+ declare void @llvm.memset.p0i8.i64(i8* <dest>, i8 <val>,
+ i64 <len>, i32 <align>, i1 <isvolatile>)
Overview:
@@ -5827,18 +6127,24 @@ LLVM.
particular byte value.
Note that, unlike the standard libc function, the llvm.memset
- intrinsic does not return a value, and takes an extra alignment argument.
+ intrinsic does not return a value and takes extra alignment/volatile
+ arguments. Also, the destination can be in an arbitrary address space.
Arguments:
The first argument is a pointer to the destination to fill, the second is the
- byte value to fill it with, the third argument is an integer argument
+ byte value with which to fill it, the third argument is an integer argument
specifying the number of bytes to fill, and the fourth argument is the known
- alignment of destination location.
+ alignment of the destination location.
-If the call to this intrinisic has an alignment value that is not 0 or 1,
+
If the call to this intrinsic has an alignment value that is not 0 or 1,
then the caller guarantees that the destination pointer is aligned to that
boundary.
+If the isvolatile parameter is true, the
+ llvm.memset call is a volatile operation.
+ The detailed access behavior is not very cleanly specified and it is unwise
+ to depend on it.
+
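A sketch of a call with the arguments described above (the destination name, fill value, length, and alignment are illustrative):

```llvm
; zero 32 bytes with 4-byte alignment, non-volatile
call void @llvm.memset.p0i8.i32(i8* %dst, i8 0,
                                i32 32, i32 4, i1 false)
```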
Semantics:
The 'llvm.memset.*' intrinsics fill "len" bytes of memory starting
at the destination location. If the argument is known to be aligned to some
@@ -6458,6 +6764,97 @@ LLVM.
+
+
+
+
+
+
Half precision floating point is a storage-only format. This means that it is
+ a dense encoding (in memory) but does not support computation in the
+ format.
+
+
This means that code must first load the half-precision floating point
+ value as an i16, then convert it to float with llvm.convert.from.fp16.
+ Computation can then be performed on the float value (including extending to
+ double etc). To store the value back to memory, it is first converted to
+ float if needed, then converted to i16 with
+ llvm.convert.to.fp16, and then
+ stored as an i16 value.
+
+
+
+
+
+
+
+
Syntax:
+
+ declare i16 @llvm.convert.to.fp16(f32 %a)
+
+
+
Overview:
+
The 'llvm.convert.to.fp16' intrinsic function performs
+ a conversion from single precision floating point format to half precision
+ floating point format.
+
+
Arguments:
+
The intrinsic function takes a single argument: the value to be
+ converted.
+
+
Semantics:
+
The 'llvm.convert.to.fp16' intrinsic function performs
+ a conversion from single precision floating point format to half precision
+ floating point format. The return value is an i16 which
+ contains the converted number.
+
+
Examples:
+
+ %res = call i16 @llvm.convert.to.fp16(f32 %a)
+ store i16 %res, i16* @x, align 2
+
+
+
+
+
+
+
+
+
+
Syntax:
+
+ declare f32 @llvm.convert.from.fp16(i16 %a)
+
+
+
Overview:
+
The 'llvm.convert.from.fp16' intrinsic function performs
+ a conversion from half precision floating point format to single precision
+ floating point format.
+
+
Arguments:
+
The intrinsic function takes a single argument: the value to be
+ converted.
+
+
Semantics:
+
The 'llvm.convert.from.fp16' intrinsic function performs a
+ conversion from half precision floating point format to single
+ precision floating point format. The input half-float value is represented by
+ an i16 value.
+
+
Examples:
+
+ %a = load i16* @x, align 2
+ %res = call f32 @llvm.convert.from.fp16(i16 %a)
+
+
+
+
Debugger Intrinsics
@@ -6494,7 +6891,8 @@ LLVM.
This intrinsic makes it possible to excise one parameter, marked with
- the nest attribute, from a function. The result is a callable
+ the nest attribute, from a function.
+ The result is a callable
function pointer lacking the nest parameter - the caller does not need to
provide a value for it. Instead, the value to use is stored in advance in a
"trampoline", a block of memory usually allocated on the stack, which also
@@ -6506,17 +6904,15 @@ LLVM.
pointer has signature
i32 (i32, i32)*. It can be created as
follows:
-
-
+
%tramp = alloca [10 x i8], align 4 ; size and alignment only correct for X86
%tramp1 = getelementptr [10 x i8]* %tramp, i32 0, i32 0
- %p = call i8* @llvm.init.trampoline( i8* %tramp1, i8* bitcast (i32 (i8* nest , i32, i32)* @f to i8*), i8* %nval )
+ %p = call i8* @llvm.init.trampoline(i8* %tramp1, i8* bitcast (i32 (i8* nest , i32, i32)* @f to i8*), i8* %nval)
%fp = bitcast i8* %p to i32 (i32, i32)*
-
-
The call %val = call i32 %fp( i32 %x, i32 %y ) is then equivalent
- to %val = call i32 %f( i8* %nval, i32 %x, i32 %y ).
+
The call %val = call i32 %fp(i32 %x, i32 %y) is then equivalent
+ to %val = call i32 %f(i8* %nval, i32 %x, i32 %y).
@@ -6596,7 +6992,7 @@ LLVM.
Syntax:
- declare void @llvm.memory.barrier( i1 <ll>, i1 <ls>, i1 <sl>, i1 <ss>, i1 <device> )
+ declare void @llvm.memory.barrier(i1 <ll>, i1 <ls>, i1 <sl>, i1 <ss>, i1 <device>)
Overview:
@@ -6606,7 +7002,7 @@ LLVM.
Arguments:
The llvm.memory.barrier intrinsic requires five boolean arguments.
The first four arguments enable a specific barrier as listed below. The
- fith argument specifies that the barrier applies to io or device or uncached
+ fifth argument specifies that the barrier applies to io or device or uncached
memory.
@@ -6653,7 +7049,7 @@ LLVM.
store i32 4, %ptr
%result1 = load i32* %ptr ; yields {i32}:result1 = 4
- call void @llvm.memory.barrier( i1 false, i1 true, i1 false, i1 false )
+ call void @llvm.memory.barrier(i1 false, i1 true, i1 false, i1 false)
; guarantee the above finishes
store i32 8, %ptr ; before this begins
@@ -6673,10 +7069,10 @@ LLVM.
support all bit widths however.
- declare i8 @llvm.atomic.cmp.swap.i8.p0i8( i8* <ptr>, i8 <cmp>, i8 <val> )
- declare i16 @llvm.atomic.cmp.swap.i16.p0i16( i16* <ptr>, i16 <cmp>, i16 <val> )
- declare i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* <ptr>, i32 <cmp>, i32 <val> )
- declare i64 @llvm.atomic.cmp.swap.i64.p0i64( i64* <ptr>, i64 <cmp>, i64 <val> )
+ declare i8 @llvm.atomic.cmp.swap.i8.p0i8(i8* <ptr>, i8 <cmp>, i8 <val>)
+ declare i16 @llvm.atomic.cmp.swap.i16.p0i16(i16* <ptr>, i16 <cmp>, i16 <val>)
+ declare i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* <ptr>, i32 <cmp>, i32 <val>)
+ declare i64 @llvm.atomic.cmp.swap.i64.p0i64(i64* <ptr>, i64 <cmp>, i64 <val>)
Overview:
@@ -6705,13 +7101,13 @@ LLVM.
store i32 4, %ptr
%val1 = add i32 4, 4
-%result1 = call i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* %ptr, i32 4, %val1 )
+%result1 = call i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* %ptr, i32 4, %val1)
; yields {i32}:result1 = 4
%stored1 = icmp eq i32 %result1, 4
; yields {i1}:stored1 = true
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 8
%val2 = add i32 1, 1
-%result2 = call i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* %ptr, i32 5, %val2 )
+%result2 = call i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* %ptr, i32 5, %val2)
; yields {i32}:result2 = 8
%stored2 = icmp eq i32 %result2, 5
; yields {i1}:stored2 = false
@@ -6731,10 +7127,10 @@ LLVM.
integer bit width. Not all targets support all bit widths however.
- declare i8 @llvm.atomic.swap.i8.p0i8( i8* <ptr>, i8 <val> )
- declare i16 @llvm.atomic.swap.i16.p0i16( i16* <ptr>, i16 <val> )
- declare i32 @llvm.atomic.swap.i32.p0i32( i32* <ptr>, i32 <val> )
- declare i64 @llvm.atomic.swap.i64.p0i64( i64* <ptr>, i64 <val> )
+ declare i8 @llvm.atomic.swap.i8.p0i8(i8* <ptr>, i8 <val>)
+ declare i16 @llvm.atomic.swap.i16.p0i16(i16* <ptr>, i16 <val>)
+ declare i32 @llvm.atomic.swap.i32.p0i32(i32* <ptr>, i32 <val>)
+ declare i64 @llvm.atomic.swap.i64.p0i64(i64* <ptr>, i64 <val>)
Overview:
@@ -6761,13 +7157,13 @@ LLVM.
store i32 4, %ptr
%val1 = add i32 4, 4
-%result1 = call i32 @llvm.atomic.swap.i32.p0i32( i32* %ptr, i32 %val1 )
+%result1 = call i32 @llvm.atomic.swap.i32.p0i32(i32* %ptr, i32 %val1)
; yields {i32}:result1 = 4
%stored1 = icmp eq i32 %result1, 4
; yields {i1}:stored1 = true
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 8
%val2 = add i32 1, 1
-%result2 = call i32 @llvm.atomic.swap.i32.p0i32( i32* %ptr, i32 %val2 )
+%result2 = call i32 @llvm.atomic.swap.i32.p0i32(i32* %ptr, i32 %val2)
; yields {i32}:result2 = 8
%stored2 = icmp eq i32 %result2, 8
; yields {i1}:stored2 = true
@@ -6789,10 +7185,10 @@ LLVM.
any integer bit width. Not all targets support all bit widths however.
- declare i8 @llvm.atomic.load.add.i8..p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.add.i16..p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.add.i32..p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.add.i64..p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.add.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.add.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.add.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.add.i64.p0i64(i64* <ptr>, i64 <delta>)
Overview:
@@ -6815,11 +7211,11 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 4, %ptr
-%result1 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 4 )
+%result1 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 4)
; yields {i32}:result1 = 4
-%result2 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 2 )
+%result2 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 2)
; yields {i32}:result2 = 8
-%result3 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 5 )
+%result3 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 5)
; yields {i32}:result3 = 10
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 15
@@ -6840,10 +7236,10 @@ LLVM.
support all bit widths however.
- declare i8 @llvm.atomic.load.sub.i8.p0i32( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.sub.i16.p0i32( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.sub.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.sub.i64.p0i32( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.sub.i8.p0i32(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.sub.i16.p0i32(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.sub.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.sub.i64.p0i32(i64* <ptr>, i64 <delta>)
Overview:
@@ -6867,11 +7263,11 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 8, %ptr
-%result1 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 4 )
+%result1 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 4)
; yields {i32}:result1 = 8
-%result2 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 2 )
+%result2 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 2)
; yields {i32}:result2 = 4
-%result3 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 5 )
+%result3 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 5)
; yields {i32}:result3 = 2
%memval1 = load i32* %ptr
; yields {i32}:memval1 = -3
@@ -6896,31 +7292,31 @@ LLVM.
widths however.
- declare i8 @llvm.atomic.load.and.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.and.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.and.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.and.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.and.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.and.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.and.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.and.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.or.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.or.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.or.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.or.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.or.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.or.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.or.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.or.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.nand.i8.p0i32( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.nand.i16.p0i32( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.nand.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.nand.i64.p0i32( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.nand.i8.p0i32(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.nand.i16.p0i32(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.nand.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.nand.i64.p0i32(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.xor.i8.p0i32( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.xor.i16.p0i32( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.xor.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.xor.i64.p0i32( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.xor.i8.p0i32(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.xor.i16.p0i32(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.xor.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.xor.i64.p0i32(i64* <ptr>, i64 <delta>)
Overview:
@@ -6945,13 +7341,13 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 0x0F0F, %ptr
-%result0 = call i32 @llvm.atomic.load.nand.i32.p0i32( i32* %ptr, i32 0xFF )
+%result0 = call i32 @llvm.atomic.load.nand.i32.p0i32(i32* %ptr, i32 0xFF)
; yields {i32}:result0 = 0x0F0F
-%result1 = call i32 @llvm.atomic.load.and.i32.p0i32( i32* %ptr, i32 0xFF )
+%result1 = call i32 @llvm.atomic.load.and.i32.p0i32(i32* %ptr, i32 0xFF)
; yields {i32}:result1 = 0xFFFFFFF0
-%result2 = call i32 @llvm.atomic.load.or.i32.p0i32( i32* %ptr, i32 0F )
+%result2 = call i32 @llvm.atomic.load.or.i32.p0i32(i32* %ptr, i32 0F)
; yields {i32}:result2 = 0xF0
-%result3 = call i32 @llvm.atomic.load.xor.i32.p0i32( i32* %ptr, i32 0F )
+%result3 = call i32 @llvm.atomic.load.xor.i32.p0i32(i32* %ptr, i32 0F)
; yields {i32}:result3 = FF
%memval1 = load i32* %ptr
; yields {i32}:memval1 = F0
@@ -6975,31 +7371,31 @@ LLVM.
address spaces. Not all targets support all bit widths however.
- declare i8 @llvm.atomic.load.max.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.max.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.max.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.max.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.max.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.max.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.max.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.max.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.min.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.min.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.min.i32..p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.min.i64..p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.min.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.min.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.min.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.min.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.umax.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.umax.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.umax.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.umax.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.umax.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.umax.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.umax.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.umax.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.umin.i8..p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.umin.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.umin.i32..p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.umin.i64..p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.umin.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.umin.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.umin.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.umin.i64.p0i64(i64* <ptr>, i64 <delta>)
Overview:
@@ -7024,13 +7420,13 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 7, %ptr
-%result0 = call i32 @llvm.atomic.load.min.i32.p0i32( i32* %ptr, i32 -2 )
+%result0 = call i32 @llvm.atomic.load.min.i32.p0i32(i32* %ptr, i32 -2)
; yields {i32}:result0 = 7
-%result1 = call i32 @llvm.atomic.load.max.i32.p0i32( i32* %ptr, i32 8 )
+%result1 = call i32 @llvm.atomic.load.max.i32.p0i32(i32* %ptr, i32 8)
; yields {i32}:result1 = -2
-%result2 = call i32 @llvm.atomic.load.umin.i32.p0i32( i32* %ptr, i32 10 )
+%result2 = call i32 @llvm.atomic.load.umin.i32.p0i32(i32* %ptr, i32 10)
; yields {i32}:result2 = 8
-%result3 = call i32 @llvm.atomic.load.umax.i32.p0i32( i32* %ptr, i32 30 )
+%result3 = call i32 @llvm.atomic.load.umax.i32.p0i32(i32* %ptr, i32 30)
; yields {i32}:result3 = 8
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 30
@@ -7118,7 +7514,7 @@ LLVM.
Syntax:
- declare {}* @llvm.invariant.start(i64 <size>, i8* nocapture <ptr>) readonly
+ declare {}* @llvm.invariant.start(i64 <size>, i8* nocapture <ptr>)
Overview:
@@ -7185,7 +7581,7 @@ LLVM.
Syntax:
- declare void @llvm.var.annotation(i8* <val>, i8* <str>, i8* <str>, i32 <int> )
+ declare void @llvm.var.annotation(i8* <val>, i8* <str>, i8* <str>, i32 <int>)
Overview:
@@ -7216,11 +7612,11 @@ LLVM.
any integer bit width.
- declare i8 @llvm.annotation.i8(i8 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i16 @llvm.annotation.i16(i16 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i32 @llvm.annotation.i32(i32 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i64 @llvm.annotation.i64(i64 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i256 @llvm.annotation.i256(i256 <val>, i8* <str>, i8* <str>, i32 <int> )
+ declare i8 @llvm.annotation.i8(i8 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i16 @llvm.annotation.i16(i16 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i32 @llvm.annotation.i32(i32 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i64 @llvm.annotation.i64(i64 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i256 @llvm.annotation.i256(i256 <val>, i8* <str>, i8* <str>, i32 <int>)
Overview:
@@ -7274,7 +7670,7 @@ LLVM.
Syntax:
- declare void @llvm.stackprotector( i8* <guard>, i8** <slot> )
+ declare void @llvm.stackprotector(i8* <guard>, i8** <slot>)
Overview:
@@ -7293,7 +7689,7 @@ LLVM.
the
AllocaInst stack slot to be before local variables on the
stack. This is to ensure that if a local variable on the stack is
overwritten, it will destroy the value of the guard. When the function exits,
- the guard on the stack is checked against the original guard. If they're
+ the guard on the stack is checked against the original guard. If they are
different, then the program aborts by calling the
__stack_chk_fail()
function.
@@ -7308,30 +7704,29 @@ LLVM.
Syntax:
- declare i32 @llvm.objectsize.i32( i8* <object>, i1 <type> )
- declare i64 @llvm.objectsize.i64( i8* <object>, i1 <type> )
+ declare i32 @llvm.objectsize.i32(i8* <object>, i1 <type>)
+ declare i64 @llvm.objectsize.i64(i8* <object>, i1 <type>)
Overview:
-
The llvm.objectsize intrinsic is designed to provide information
- to the optimizers to discover at compile time either a) when an
- operation like memcpy will either overflow a buffer that corresponds to
- an object, or b) to determine that a runtime check for overflow isn't
- necessary. An object in this context means an allocation of a
- specific class, structure, array, or other object.
+
The llvm.objectsize intrinsic is designed to provide information to
+ the optimizers to determine at compile time whether a) an operation (like
+ memcpy) will overflow a buffer that corresponds to an object, or b) that a
+ runtime check for overflow isn't necessary. An object in this context means
+ an allocation of a specific class, structure, array, or other object.
Arguments:
-
The llvm.objectsize intrinsic takes two arguments. The first
+
The llvm.objectsize intrinsic takes two arguments. The first
argument is a pointer to or into the object. The second argument
- is a boolean 0 or 1. This argument determines whether you want the
- maximum (0) or minimum (1) bytes remaining. This needs to be a literal 0 or
+ is a boolean 0 or 1. This argument determines whether you want the
+ maximum (0) or minimum (1) bytes remaining. This needs to be a literal 0 or
1, variables are not allowed.
Semantics:
The llvm.objectsize intrinsic is lowered to either a constant
- representing the size of the object concerned or i32/i64 -1 or 0
- (depending on the type argument if the size cannot be determined
- at compile time.
+ representing the size of the object concerned, or i32/i64
+ -1 or 0, depending on the type argument, if the size
+ cannot be determined at compile time.
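A sketch of how a frontend might use this intrinsic (the buffer size and offset are illustrative assumptions):

```llvm
%buf = alloca [32 x i8]
%p = getelementptr [32 x i8]* %buf, i32 0, i32 8
; request the maximum bytes remaining from %p to the end of %buf
%sz = call i32 @llvm.objectsize.i32(i8* %p, i1 0)
```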