The optional function attributes list. Only
'noreturn', 'nounwind', 'readonly' and
@@ -5033,7 +5364,7 @@ Loop: ; Infinite loop that counts from 0 on up...
Example:
%retval = call i32 @test(i32 %argc)
- call i32 (i8 *, ...)* @printf(i8 * %msg, i32 12, i8 42) ; yields i32
+ call i32 (i8*, ...)* @printf(i8* %msg, i32 12, i8 42) ; yields i32
%X = tail call i32 @foo() ; yields i32
%Y = tail call fastcc i32 @foo() ; yields i32
call void %foo(i8 97 signext)
@@ -5050,7 +5381,7 @@ Loop: ; Infinite loop that counts from 0 on up...
standard C99 library as being the C99 library functions, and may perform
optimizations or generate code for them under that assumption. This is
something we'd like to change in the future to provide better support for
-freestanding environments and non-C-based langauges.
+freestanding environments and non-C-based languages.
@@ -5170,8 +5501,7 @@ freestanding environments and non-C-based langauges.
instruction and the variable argument handling intrinsic functions are
used.
-
-
+
define i32 @test(i32 %X, ...) {
; Initialize variable argument processing
%ap = alloca i8*
@@ -5196,7 +5526,6 @@ declare void @llvm.va_start(i8*)
declare void @llvm.va_copy(i8*, i8*)
declare void @llvm.va_end(i8*)
-
@@ -5466,7 +5795,7 @@ LLVM.
Syntax:
- declare i8 *@llvm.frameaddress(i32 <level>)
+ declare i8* @llvm.frameaddress(i32 <level>)
Overview:
@@ -5500,7 +5829,7 @@ LLVM.
Syntax:
- declare i8 *@llvm.stacksave()
+ declare i8* @llvm.stacksave()
Overview:
@@ -5530,7 +5859,7 @@ LLVM.
Syntax:
- declare void @llvm.stackrestore(i8 * %ptr)
+ declare void @llvm.stackrestore(i8* %ptr)
Overview:
@@ -5606,7 +5935,7 @@ LLVM.
Semantics:
This intrinsic does not modify the behavior of the program. Backends that do
- not support this intrinisic may ignore it.
+ not support this intrinsic may ignore it.
@@ -5619,7 +5948,7 @@ LLVM.
Syntax:
- declare i64 @llvm.readcyclecounter( )
+ declare i64 @llvm.readcyclecounter()
Overview:
@@ -5660,17 +5989,14 @@ LLVM.
Syntax:
This is an overloaded intrinsic. You can use llvm.memcpy on any
- integer bit width. Not all targets support all bit widths however.
+ integer bit width and for different address spaces. Not all targets support
+ all bit widths, however.
- declare void @llvm.memcpy.i8(i8 * <dest>, i8 * <src>,
- i8 <len>, i32 <align>)
- declare void @llvm.memcpy.i16(i8 * <dest>, i8 * <src>,
- i16 <len>, i32 <align>)
- declare void @llvm.memcpy.i32(i8 * <dest>, i8 * <src>,
- i32 <len>, i32 <align>)
- declare void @llvm.memcpy.i64(i8 * <dest>, i8 * <src>,
- i64 <len>, i32 <align>)
+ declare void @llvm.memcpy.p0i8.p0i8.i32(i8* <dest>, i8* <src>,
+ i32 <len>, i32 <align>, i1 <isvolatile>)
+ declare void @llvm.memcpy.p0i8.p0i8.i64(i8* <dest>, i8* <src>,
+ i64 <len>, i32 <align>, i1 <isvolatile>)
Overview:
@@ -5678,19 +6004,28 @@ LLVM.
source location to the destination location.
Note that, unlike the standard libc function, the llvm.memcpy.*
- intrinsics do not return a value, and takes an extra alignment argument.
+ intrinsics do not return a value, take extra alignment and isvolatile
+ arguments, and allow the pointers to be in specified address spaces.
Arguments:
+
The first argument is a pointer to the destination, the second is a pointer
to the source. The third argument is an integer argument specifying the
- number of bytes to copy, and the fourth argument is the alignment of the
- source and destination locations.
+ number of bytes to copy, the fourth argument is the alignment of the
+ source and destination locations, and the fifth is a boolean indicating a
+ volatile access.
-If the call to this intrinisic has an alignment value that is not 0 or 1,
+
If the call to this intrinsic has an alignment value that is not 0 or 1,
then the caller guarantees that both the source and destination pointers are
aligned to that boundary.
+If the isvolatile parameter is true, the
+ llvm.memcpy call is a volatile operation.
+ The detailed access behavior is not very cleanly specified and it is unwise
+ to depend on it.
+
Semantics:
+
The 'llvm.memcpy.*' intrinsics copy a block of memory from the
source location to the destination location, which are not allowed to
overlap. It copies "len" bytes of memory over. If the argument is known to
@@ -5708,17 +6043,14 @@ LLVM.
Syntax:
This is an overloaded intrinsic. You can use llvm.memmove on any integer bit
- width. Not all targets support all bit widths however.
+ width and for different address spaces. Not all targets support all bit
+ widths, however.
- declare void @llvm.memmove.i8(i8 * <dest>, i8 * <src>,
- i8 <len>, i32 <align>)
- declare void @llvm.memmove.i16(i8 * <dest>, i8 * <src>,
- i16 <len>, i32 <align>)
- declare void @llvm.memmove.i32(i8 * <dest>, i8 * <src>,
- i32 <len>, i32 <align>)
- declare void @llvm.memmove.i64(i8 * <dest>, i8 * <src>,
- i64 <len>, i32 <align>)
+ declare void @llvm.memmove.p0i8.p0i8.i32(i8* <dest>, i8* <src>,
+ i32 <len>, i32 <align>, i1 <isvolatile>)
+ declare void @llvm.memmove.p0i8.p0i8.i64(i8* <dest>, i8* <src>,
+ i64 <len>, i32 <align>, i1 <isvolatile>)
Overview:
@@ -5728,19 +6060,28 @@ LLVM.
overlap.
Note that, unlike the standard libc function, the llvm.memmove.*
- intrinsics do not return a value, and takes an extra alignment argument.
+ intrinsics do not return a value, take extra alignment and isvolatile
+ arguments, and allow the pointers to be in specified address spaces.
Arguments:
+
The first argument is a pointer to the destination, the second is a pointer
to the source. The third argument is an integer argument specifying the
- number of bytes to copy, and the fourth argument is the alignment of the
- source and destination locations.
+ number of bytes to copy, the fourth argument is the alignment of the
+ source and destination locations, and the fifth is a boolean indicating a
+ volatile access.
-If the call to this intrinisic has an alignment value that is not 0 or 1,
+
If the call to this intrinsic has an alignment value that is not 0 or 1,
then the caller guarantees that the source and destination pointers are
aligned to that boundary.
+If the isvolatile parameter is true, the
+ llvm.memmove call is a volatile operation.
+ The detailed access behavior is not very cleanly specified and it is unwise
+ to depend on it.
+
Semantics:
+
The 'llvm.memmove.*' intrinsics copy a block of memory from the
source location to the destination location, which may overlap. It copies
"len" bytes of memory over. If the argument is known to be aligned to some
@@ -5758,17 +6099,14 @@ LLVM.
Syntax:
This is an overloaded intrinsic. You can use llvm.memset on any integer bit
- width. Not all targets support all bit widths however.
+ width and for different address spaces. However, not all targets support all
+ bit widths.
- declare void @llvm.memset.i8(i8 * <dest>, i8 <val>,
- i8 <len>, i32 <align>)
- declare void @llvm.memset.i16(i8 * <dest>, i8 <val>,
- i16 <len>, i32 <align>)
- declare void @llvm.memset.i32(i8 * <dest>, i8 <val>,
- i32 <len>, i32 <align>)
- declare void @llvm.memset.i64(i8 * <dest>, i8 <val>,
- i64 <len>, i32 <align>)
+ declare void @llvm.memset.p0i8.i32(i8* <dest>, i8 <val>,
+ i32 <len>, i32 <align>, i1 <isvolatile>)
+ declare void @llvm.memset.p0i8.i64(i8* <dest>, i8 <val>,
+ i64 <len>, i32 <align>, i1 <isvolatile>)
Overview:
@@ -5776,18 +6114,24 @@ LLVM.
particular byte value.
Note that, unlike the standard libc function, the llvm.memset
- intrinsic does not return a value, and takes an extra alignment argument.
+ intrinsic does not return a value and takes extra alignment/volatile
+ arguments. Also, the destination can be in an arbitrary address space.
Arguments:
The first argument is a pointer to the destination to fill, the second is the
- byte value to fill it with, the third argument is an integer argument
+ byte value with which to fill it, the third argument is an integer argument
specifying the number of bytes to fill, and the fourth argument is the known
- alignment of destination location.
+ alignment of the destination location.
-If the call to this intrinisic has an alignment value that is not 0 or 1,
+
If the call to this intrinsic has an alignment value that is not 0 or 1,
then the caller guarantees that the destination pointer is aligned to that
boundary.
+If the isvolatile parameter is true, the
+ llvm.memset call is a volatile operation.
+ The detailed access behavior is not very cleanly specified and it is unwise
+ to depend on it.
+
Semantics:
The 'llvm.memset.*' intrinsics fill "len" bytes of memory starting
at the destination location. If the argument is known to be aligned to some
@@ -6407,6 +6751,97 @@ LLVM.
+
+
+
+
+
+
Half precision floating point is a storage-only format. This means that it is
+ a dense encoding (in memory) but does not support computation in the
+ format.
+
+
This means that code must first load the half-precision floating point
+ value as an i16, then convert it to float with llvm.convert.from.fp16.
+ Computation can then be performed on the float value (including extending to
+ double etc). To store the value back to memory, it is first converted to
+ float if needed, then converted to i16 with
+ llvm.convert.to.fp16, and then
+ stored as an i16 value.
+
+
+
+
+
+
+
+
Syntax:
+
+ declare i16 @llvm.convert.to.fp16(float %a)
+
+
+
Overview:
+
The 'llvm.convert.to.fp16' intrinsic function performs
+ a conversion from single precision floating point format to half precision
+ floating point format.
+
+
Arguments:
+
The intrinsic function takes a single argument: the value to be
+ converted.
+
+
Semantics:
+
The 'llvm.convert.to.fp16' intrinsic function performs
+ a conversion from single precision floating point format to half precision
+ floating point format. The return value is an i16 which
+ contains the converted number.
+
+
Examples:
+
+ %res = call i16 @llvm.convert.to.fp16(float %a)
+ store i16 %res, i16* @x, align 2
+
+
+
+
+
+
+
+
+
+
Syntax:
+
+ declare float @llvm.convert.from.fp16(i16 %a)
+
+
+
Overview:
+
The 'llvm.convert.from.fp16' intrinsic function performs
+ a conversion from half precision floating point format to single precision
+ floating point format.
+
+
Arguments:
+
The intrinsic function takes a single argument: the value to be
+ converted.
+
+
Semantics:
+
The 'llvm.convert.from.fp16' intrinsic function performs a
+ conversion from half precision floating point format to single
+ precision floating point format. The input half-float value is represented by
+ an i16 value.
+
+
Examples:
+
+ %a = load i16* @x, align 2
+ %res = call float @llvm.convert.from.fp16(i16 %a)
+
+
+
+
Debugger Intrinsics
@@ -6443,7 +6878,8 @@ LLVM.
This intrinsic makes it possible to excise one parameter, marked with
- the nest attribute, from a function. The result is a callable
+ the nest attribute, from a function.
+ The result is a callable
function pointer lacking the nest parameter - the caller does not need to
provide a value for it. Instead, the value to use is stored in advance in a
"trampoline", a block of memory usually allocated on the stack, which also
@@ -6455,17 +6891,15 @@ LLVM.
pointer has signature
i32 (i32, i32)*. It can be created as
follows:
-
-
+
%tramp = alloca [10 x i8], align 4 ; size and alignment only correct for X86
%tramp1 = getelementptr [10 x i8]* %tramp, i32 0, i32 0
- %p = call i8* @llvm.init.trampoline( i8* %tramp1, i8* bitcast (i32 (i8* nest , i32, i32)* @f to i8*), i8* %nval )
+ %p = call i8* @llvm.init.trampoline(i8* %tramp1, i8* bitcast (i32 (i8* nest, i32, i32)* @f to i8*), i8* %nval)
%fp = bitcast i8* %p to i32 (i32, i32)*
-
-
The call %val = call i32 %fp( i32 %x, i32 %y ) is then equivalent
- to %val = call i32 %f( i8* %nval, i32 %x, i32 %y ).
+
The call %val = call i32 %fp(i32 %x, i32 %y) is then equivalent
+ to %val = call i32 %f(i8* %nval, i32 %x, i32 %y).
@@ -6545,7 +6979,7 @@ LLVM.
Syntax:
- declare void @llvm.memory.barrier( i1 <ll>, i1 <ls>, i1 <sl>, i1 <ss>, i1 <device> )
+ declare void @llvm.memory.barrier(i1 <ll>, i1 <ls>, i1 <sl>, i1 <ss>, i1 <device>)
Overview:
@@ -6555,7 +6989,7 @@ LLVM.
Arguments:
The llvm.memory.barrier intrinsic requires five boolean arguments.
The first four arguments enable a specific barrier as listed below. The
- fith argument specifies that the barrier applies to io or device or uncached
+ fifth argument specifies that the barrier applies to io or device or uncached
memory.
@@ -6602,7 +7036,7 @@ LLVM.
store i32 4, %ptr
%result1 = load i32* %ptr ; yields {i32}:result1 = 4
- call void @llvm.memory.barrier( i1 false, i1 true, i1 false, i1 false )
+ call void @llvm.memory.barrier(i1 false, i1 true, i1 false, i1 false, i1 true)
; guarantee the above finishes
store i32 8, %ptr ; before this begins
@@ -6622,10 +7056,10 @@ LLVM.
support all bit widths however.
- declare i8 @llvm.atomic.cmp.swap.i8.p0i8( i8* <ptr>, i8 <cmp>, i8 <val> )
- declare i16 @llvm.atomic.cmp.swap.i16.p0i16( i16* <ptr>, i16 <cmp>, i16 <val> )
- declare i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* <ptr>, i32 <cmp>, i32 <val> )
- declare i64 @llvm.atomic.cmp.swap.i64.p0i64( i64* <ptr>, i64 <cmp>, i64 <val> )
+ declare i8 @llvm.atomic.cmp.swap.i8.p0i8(i8* <ptr>, i8 <cmp>, i8 <val>)
+ declare i16 @llvm.atomic.cmp.swap.i16.p0i16(i16* <ptr>, i16 <cmp>, i16 <val>)
+ declare i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* <ptr>, i32 <cmp>, i32 <val>)
+ declare i64 @llvm.atomic.cmp.swap.i64.p0i64(i64* <ptr>, i64 <cmp>, i64 <val>)
Overview:
@@ -6654,13 +7088,13 @@ LLVM.
store i32 4, %ptr
%val1 = add i32 4, 4
-%result1 = call i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* %ptr, i32 4, %val1 )
+%result1 = call i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* %ptr, i32 4, %val1)
; yields {i32}:result1 = 4
%stored1 = icmp eq i32 %result1, 4
; yields {i1}:stored1 = true
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 8
%val2 = add i32 1, 1
-%result2 = call i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* %ptr, i32 5, %val2 )
+%result2 = call i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* %ptr, i32 5, %val2)
; yields {i32}:result2 = 8
%stored2 = icmp eq i32 %result2, 5
; yields {i1}:stored2 = false
@@ -6680,10 +7114,10 @@ LLVM.
integer bit width. Not all targets support all bit widths however.
- declare i8 @llvm.atomic.swap.i8.p0i8( i8* <ptr>, i8 <val> )
- declare i16 @llvm.atomic.swap.i16.p0i16( i16* <ptr>, i16 <val> )
- declare i32 @llvm.atomic.swap.i32.p0i32( i32* <ptr>, i32 <val> )
- declare i64 @llvm.atomic.swap.i64.p0i64( i64* <ptr>, i64 <val> )
+ declare i8 @llvm.atomic.swap.i8.p0i8(i8* <ptr>, i8 <val>)
+ declare i16 @llvm.atomic.swap.i16.p0i16(i16* <ptr>, i16 <val>)
+ declare i32 @llvm.atomic.swap.i32.p0i32(i32* <ptr>, i32 <val>)
+ declare i64 @llvm.atomic.swap.i64.p0i64(i64* <ptr>, i64 <val>)
Overview:
@@ -6710,13 +7144,13 @@ LLVM.
store i32 4, %ptr
%val1 = add i32 4, 4
-%result1 = call i32 @llvm.atomic.swap.i32.p0i32( i32* %ptr, i32 %val1 )
+%result1 = call i32 @llvm.atomic.swap.i32.p0i32(i32* %ptr, i32 %val1)
; yields {i32}:result1 = 4
%stored1 = icmp eq i32 %result1, 4
; yields {i1}:stored1 = true
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 8
%val2 = add i32 1, 1
-%result2 = call i32 @llvm.atomic.swap.i32.p0i32( i32* %ptr, i32 %val2 )
+%result2 = call i32 @llvm.atomic.swap.i32.p0i32(i32* %ptr, i32 %val2)
; yields {i32}:result2 = 8
%stored2 = icmp eq i32 %result2, 8
; yields {i1}:stored2 = true
@@ -6738,10 +7172,10 @@ LLVM.
any integer bit width. Not all targets support all bit widths however.
- declare i8 @llvm.atomic.load.add.i8..p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.add.i16..p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.add.i32..p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.add.i64..p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.add.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.add.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.add.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.add.i64.p0i64(i64* <ptr>, i64 <delta>)
Overview:
@@ -6764,11 +7198,11 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 4, %ptr
-%result1 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 4 )
+%result1 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 4)
; yields {i32}:result1 = 4
-%result2 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 2 )
+%result2 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 2)
; yields {i32}:result2 = 8
-%result3 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 5 )
+%result3 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 5)
; yields {i32}:result3 = 10
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 15
@@ -6789,10 +7223,10 @@ LLVM.
support all bit widths however.
- declare i8 @llvm.atomic.load.sub.i8.p0i32( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.sub.i16.p0i32( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.sub.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.sub.i64.p0i32( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.sub.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.sub.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.sub.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.sub.i64.p0i64(i64* <ptr>, i64 <delta>)
Overview:
@@ -6816,11 +7250,11 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 8, %ptr
-%result1 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 4 )
+%result1 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 4)
; yields {i32}:result1 = 8
-%result2 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 2 )
+%result2 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 2)
; yields {i32}:result2 = 4
-%result3 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 5 )
+%result3 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 5)
; yields {i32}:result3 = 2
%memval1 = load i32* %ptr
; yields {i32}:memval1 = -3
@@ -6845,31 +7279,31 @@ LLVM.
widths however.
- declare i8 @llvm.atomic.load.and.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.and.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.and.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.and.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.and.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.and.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.and.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.and.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.or.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.or.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.or.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.or.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.or.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.or.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.or.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.or.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.nand.i8.p0i32( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.nand.i16.p0i32( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.nand.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.nand.i64.p0i32( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.nand.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.nand.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.nand.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.nand.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.xor.i8.p0i32( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.xor.i16.p0i32( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.xor.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.xor.i64.p0i32( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.xor.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.xor.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.xor.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.xor.i64.p0i64(i64* <ptr>, i64 <delta>)
Overview:
@@ -6894,13 +7328,13 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 0x0F0F, %ptr
-%result0 = call i32 @llvm.atomic.load.nand.i32.p0i32( i32* %ptr, i32 0xFF )
+%result0 = call i32 @llvm.atomic.load.nand.i32.p0i32(i32* %ptr, i32 0xFF)
; yields {i32}:result0 = 0x0F0F
-%result1 = call i32 @llvm.atomic.load.and.i32.p0i32( i32* %ptr, i32 0xFF )
+%result1 = call i32 @llvm.atomic.load.and.i32.p0i32(i32* %ptr, i32 0xFF)
; yields {i32}:result1 = 0xFFFFFFF0
-%result2 = call i32 @llvm.atomic.load.or.i32.p0i32( i32* %ptr, i32 0F )
+%result2 = call i32 @llvm.atomic.load.or.i32.p0i32(i32* %ptr, i32 0F)
; yields {i32}:result2 = 0xF0
-%result3 = call i32 @llvm.atomic.load.xor.i32.p0i32( i32* %ptr, i32 0F )
+%result3 = call i32 @llvm.atomic.load.xor.i32.p0i32(i32* %ptr, i32 0F)
; yields {i32}:result3 = FF
%memval1 = load i32* %ptr
; yields {i32}:memval1 = F0
@@ -6924,31 +7358,31 @@ LLVM.
address spaces. Not all targets support all bit widths however.
- declare i8 @llvm.atomic.load.max.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.max.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.max.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.max.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.max.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.max.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.max.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.max.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.min.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.min.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.min.i32..p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.min.i64..p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.min.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.min.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.min.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.min.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.umax.i8.p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.umax.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.umax.i32.p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.umax.i64.p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.umax.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.umax.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.umax.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.umax.i64.p0i64(i64* <ptr>, i64 <delta>)
- declare i8 @llvm.atomic.load.umin.i8..p0i8( i8* <ptr>, i8 <delta> )
- declare i16 @llvm.atomic.load.umin.i16.p0i16( i16* <ptr>, i16 <delta> )
- declare i32 @llvm.atomic.load.umin.i32..p0i32( i32* <ptr>, i32 <delta> )
- declare i64 @llvm.atomic.load.umin.i64..p0i64( i64* <ptr>, i64 <delta> )
+ declare i8 @llvm.atomic.load.umin.i8.p0i8(i8* <ptr>, i8 <delta>)
+ declare i16 @llvm.atomic.load.umin.i16.p0i16(i16* <ptr>, i16 <delta>)
+ declare i32 @llvm.atomic.load.umin.i32.p0i32(i32* <ptr>, i32 <delta>)
+ declare i64 @llvm.atomic.load.umin.i64.p0i64(i64* <ptr>, i64 <delta>)
Overview:
@@ -6973,13 +7407,13 @@ LLVM.
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 7, %ptr
-%result0 = call i32 @llvm.atomic.load.min.i32.p0i32( i32* %ptr, i32 -2 )
+%result0 = call i32 @llvm.atomic.load.min.i32.p0i32(i32* %ptr, i32 -2)
; yields {i32}:result0 = 7
-%result1 = call i32 @llvm.atomic.load.max.i32.p0i32( i32* %ptr, i32 8 )
+%result1 = call i32 @llvm.atomic.load.max.i32.p0i32(i32* %ptr, i32 8)
; yields {i32}:result1 = -2
-%result2 = call i32 @llvm.atomic.load.umin.i32.p0i32( i32* %ptr, i32 10 )
+%result2 = call i32 @llvm.atomic.load.umin.i32.p0i32(i32* %ptr, i32 10)
; yields {i32}:result2 = 8
-%result3 = call i32 @llvm.atomic.load.umax.i32.p0i32( i32* %ptr, i32 30 )
+%result3 = call i32 @llvm.atomic.load.umax.i32.p0i32(i32* %ptr, i32 30)
; yields {i32}:result3 = 8
%memval1 = load i32* %ptr
; yields {i32}:memval1 = 30
@@ -7134,7 +7568,7 @@ LLVM.
Syntax:
- declare void @llvm.var.annotation(i8* <val>, i8* <str>, i8* <str>, i32 <int> )
+ declare void @llvm.var.annotation(i8* <val>, i8* <str>, i8* <str>, i32 <int>)
Overview:
@@ -7165,11 +7599,11 @@ LLVM.
any integer bit width.
- declare i8 @llvm.annotation.i8(i8 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i16 @llvm.annotation.i16(i16 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i32 @llvm.annotation.i32(i32 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i64 @llvm.annotation.i64(i64 <val>, i8* <str>, i8* <str>, i32 <int> )
- declare i256 @llvm.annotation.i256(i256 <val>, i8* <str>, i8* <str>, i32 <int> )
+ declare i8 @llvm.annotation.i8(i8 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i16 @llvm.annotation.i16(i16 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i32 @llvm.annotation.i32(i32 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i64 @llvm.annotation.i64(i64 <val>, i8* <str>, i8* <str>, i32 <int>)
+ declare i256 @llvm.annotation.i256(i256 <val>, i8* <str>, i8* <str>, i32 <int>)
Overview:
@@ -7223,7 +7657,7 @@ LLVM.
Syntax:
- declare void @llvm.stackprotector( i8* <guard>, i8** <slot> )
+ declare void @llvm.stackprotector(i8* <guard>, i8** <slot>)
Overview:
@@ -7242,7 +7676,7 @@ LLVM.
the
AllocaInst stack slot to be before local variables on the
stack. This is to ensure that if a local variable on the stack is
overwritten, it will destroy the value of the guard. When the function exits,
- the guard on the stack is checked against the original guard. If they're
+ the guard on the stack is checked against the original guard. If they are
different, then the program aborts by calling the
__stack_chk_fail()
function.
@@ -7257,49 +7691,29 @@ LLVM.
Syntax:
- declare i32 @llvm.objectsize.i32( i8* <ptr>, i32 <type> )
- declare i64 @llvm.objectsize.i64( i8* <ptr>, i32 <type> )
+ declare i32 @llvm.objectsize.i32(i8* <object>, i1 <type>)
+ declare i64 @llvm.objectsize.i64(i8* <object>, i1 <type>)
Overview:
-
The llvm.objectsize intrinsic is designed to provide information
- to the optimizers to either discover at compile time either a) when an
- operation like memcpy will either overflow a buffer that corresponds to
- an object, or b) to determine that a runtime check for overflow isn't
- necessary. An object in this context means an allocation of a
- specific type.
+
The llvm.objectsize intrinsic is designed to provide information to
+ the optimizers to determine at compile time either a) that an operation
+ (like memcpy) will overflow a buffer that corresponds to an object, or
+ b) that a runtime check for overflow isn't necessary. An object in this
+ context means an allocation of a specific class, structure, array, or
+ other object.
Arguments:
-
The llvm.objectsize intrinsic takes two arguments. The first
- argument is a pointer to the object ptr. The second argument
- is an integer type which ranges from 0 to 3. The first bit in
- the type corresponds to a return value based on whole objects,
- and the second bit whether or not we return the maximum or minimum
- remaining bytes computed.
-
-
- 00 |
- whole object, maximum number of bytes |
-
-
- 01 |
- partial object, maximum number of bytes |
-
-
- 10 |
- whole object, minimum number of bytes |
-
-
- 11 |
- partial object, minimum number of bytes |
-
-
-
+
The llvm.objectsize intrinsic takes two arguments. The first
+ argument is a pointer to or into the object. The second argument
+ is a boolean 0 or 1. This argument determines whether you want the
+ maximum (0) or minimum (1) number of bytes remaining. This needs to be a
+ literal 0 or 1; variables are not allowed.
+
Semantics:
The llvm.objectsize intrinsic is lowered to either a constant
- representing the size of the object concerned or i32/i64 -1 or 0
- (depending on the type argument if the size cannot be determined
- at compile time.
+ representing the size of the object concerned, or i32/i64 -1
+ or 0, depending on the type argument, if the size cannot be
+ determined at compile time.