NAKAMURA Takumi [Mon, 5 Jan 2015 21:14:14 +0000 (21:14 +0000)]
[autoconf] Export LLVM_LIBDIR_SUFFIX with an empty string in LLVMConfig.cmake; tools/llvm-config already does so.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225204
Hal Finkel [Mon, 5 Jan 2015 21:10:24 +0000 (21:10 +0000)]
[PowerPC] Fold i1 extensions with other ops
Consider this function from our README.txt file:
int foo(int a, int b) { return (a < b) << 4; }
We now explicitly track CR bits by default, so the comment in the README.txt
about not really having a SETCC is no longer accurate, but we did generate this
somewhat silly code:
cmpw 0, 3, 4
li 3, 0
li 12, 1
isel 3, 12, 3, 0
sldi 3, 3, 4
blr
which generates the zext as a select between 0 and 1, and then shifts the
result by a constant amount. Here we preprocess the DAG in order to fold the
results of operations on an extension of an i1 value into the SELECT_I[48]
pseudo instruction when the resulting constant can be materialized using one
instruction (just like the 0 and 1). This was not implemented as a DAGCombine
because the resulting code would have been anti-canonical and depends on
replacing chained user nodes, which does not fit well into the lowering
paradigm. Now we generate:
cmpw 0, 3, 4
li 3, 0
li 12, 16
isel 3, 12, 3, 0
blr
which is less silly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225203
Simon Pilgrim [Mon, 5 Jan 2015 21:09:48 +0000 (21:09 +0000)]
[X86][SSE] Fixed description for isSequentialOrUndefInRange. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225202
Colin LeMahieu [Mon, 5 Jan 2015 20:56:41 +0000 (20:56 +0000)]
[Hexagon] Adding rounding reg/reg variants, accumulating multiplies, and accumulating shifts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225201
Duncan P. N. Exon Smith [Mon, 5 Jan 2015 20:41:25 +0000 (20:41 +0000)]
IR: Prune arguments to ValueAsMetadata::ValueAsMetadata()
`LLVMContext` isn't actually used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225200
Colin LeMahieu [Mon, 5 Jan 2015 20:35:54 +0000 (20:35 +0000)]
[Hexagon] Adding V4 bit manipulating instructions, removing ALU defs without encoding bits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225199
Colin LeMahieu [Mon, 5 Jan 2015 20:14:58 +0000 (20:14 +0000)]
[Hexagon] Adding V4 logic-logic instructions and tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225198
Colin LeMahieu [Mon, 5 Jan 2015 20:04:40 +0000 (20:04 +0000)]
[Hexagon] Adding orand, bitsplit reg/reg, and modwrap instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225197
Hal Finkel [Mon, 5 Jan 2015 18:52:29 +0000 (18:52 +0000)]
[PowerPC] Remove zexts after i32 ctlz
The 64-bit semantics of cntlzw are not special: the 32-bit leading-zero count is
stored as a 64-bit value in the range [0,32]. As a result, it is always zero
extended, and it can be added to the PPCISelDAGToDAG peephole optimization as a
frontier instruction for the removal of unnecessary zero extensions.
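As a minimal C sketch of the pattern (illustrative only, not from the commit; it assumes the GCC/Clang __builtin_clz builtin as a stand-in for the i32 ctlz node):
#include <stdint.h>
/* The 32-bit leading-zero count lies in [0,32], so widening it to 64 bits
   should not need an explicit zero-extension instruction on PPC64. */
uint64_t clz_widened(uint32_t x) {
  return x ? (uint32_t)__builtin_clz(x) : 32u;  /* i32 ctlz, implicitly zero extended */
}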
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225192
Hal Finkel [Mon, 5 Jan 2015 18:09:06 +0000 (18:09 +0000)]
[PowerPC] Remove zexts after byte-swapping loads
lhbrx and lwbrx not only load their data with byte swapping, but also clear the
upper 32 bits (at least). As a result, they can be added to the PPCISelDAGToDAG
peephole optimization as frontier instructions for the removal of unnecessary
zero extensions.
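A similar hedged C sketch for the byte-swapped-load case (illustrative only; assumes the GCC/Clang __builtin_bswap32 builtin):
#include <stdint.h>
/* A byte-swapped 32-bit load (lwbrx on PPC) leaves the upper bits clear,
   so the widening to 64 bits should not need an extra zero extension. */
uint64_t load_bswapped(const uint32_t *p) {
  return __builtin_bswap32(*p);
}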
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225189
Colin LeMahieu [Mon, 5 Jan 2015 18:08:21 +0000 (18:08 +0000)]
[Hexagon] Adding round reg/imm and bitsplit instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225188
Saleem Abdulrasool [Mon, 5 Jan 2015 17:56:32 +0000 (17:56 +0000)]
SymbolRewriter: use iplist::splice
The swap implementation for iplist is currently unsupported. Simply splice the
old list into place, which achieves the same purpose. This is needed in order
to thread the -frewrite-map-file frontend option correctly. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225186
Saleem Abdulrasool [Mon, 5 Jan 2015 17:56:29 +0000 (17:56 +0000)]
SymbolRewriter: 80-column
Wrap a couple of lines. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225185
Ahmed Bougacha [Mon, 5 Jan 2015 17:10:26 +0000 (17:10 +0000)]
[AArch64] Improve codegen of store lane instructions by avoiding GPR usage.
We used to generate code similar to:
umov.b w8, v0[2]
strb w8, [x0, x1]
because the STR*ro* patterns were preferred to ST1*.
Instead, we can avoid going through GPRs, and generate:
add x8, x0, x1
st1.b { v0 }[2], [x8]
This patch increases the ST1* AddedComplexity to achieve that.
rdar://16372710
Differential Revision: http://reviews.llvm.org/D6202
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225183
Ahmed Bougacha [Mon, 5 Jan 2015 17:02:28 +0000 (17:02 +0000)]
[AArch64] Improve codegen of store lane 0 instructions by directly storing the subregister.
For 0-lane stores, we used to generate code similar to:
fmov w8, s0
str w8, [x0, x1, lsl #2]
instead of:
str s0, [x0, x1, lsl #2]
To correct that: for store lane 0 patterns, directly match to STR <subreg>0.
Byte-sized instructions don't have the special case for a 0 index,
because FPR8s are defined to have untyped content.
rdar://16372710
Differential Revision: http://reviews.llvm.org/D6772
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225181
NAKAMURA Takumi [Mon, 5 Jan 2015 14:18:04 +0000 (14:18 +0000)]
llvm/test/lit.cfg: have_ld_plugin_support(): Use decode() for stdout.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225171
Karthik Bhat [Mon, 5 Jan 2015 13:57:59 +0000 (13:57 +0000)]
Select lower fsub,fabs pattern to fabd on AArch64
This patch lowers patterns such as:
fsub v0.4s, v0.4s, v1.4s
fabs v0.4s, v0.4s
to
fabd v0.4s, v0.4s, v1.4s
on AArch64.
Review: http://reviews.llvm.org/D6791
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225169
Charlie Turner [Mon, 5 Jan 2015 13:26:37 +0000 (13:26 +0000)]
Parse Tag_compatibility correctly.
Tag_compatibility takes two arguments, but before this patch it would
erroneously accept just one; it now produces an error in that case.
Change-Id: I530f918587620d0d5dfebf639944d6083871ef7d
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225167
Charlie Turner [Mon, 5 Jan 2015 13:12:17 +0000 (13:12 +0000)]
Emit the build attribute Tag_conformance.
Claim conformance to version 2.09 of the ARM ABI.
This build attribute must be emitted first amongst the build attributes when
written to an object file. This is to simplify conformance detection by
consumers.
Change-Id: If9eddcfc416bc9ad6e5cc8cdcb05d0031af7657e
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225166
Karthik Bhat [Mon, 5 Jan 2015 13:11:07 +0000 (13:11 +0000)]
Select lower sub,abs pattern to sabd on AArch64
This patch lowers patterns such as:
sub v0.4s, v0.4s, v1.4s
abs v0.4s, v0.4s
to
sabd v0.4s, v0.4s, v1.4s
on AArch64.
Review: http://reviews.llvm.org/D6781
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225165
Michael Kuperstein [Mon, 5 Jan 2015 12:34:01 +0000 (12:34 +0000)]
Fix broken test from r225159.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225164
Chandler Carruth [Mon, 5 Jan 2015 12:32:11 +0000 (12:32 +0000)]
[PM] Don't run the machinery of invalidating all the analysis passes
when all are being preserved.
We want to short-circuit this for a couple of reasons. One, I don't
really want passes to grow a dependency on actually receiving their
invalidate call when they've been preserved. I'm thinking about removing
this entirely. But more importantly, preserving everything is likely to
be the common case in a lot of scenarios, and it would be really good to
bypass all of the invalidation and preservation machinery there.
Avoiding calling N opaque functions to try to invalidate things that are
by definition still valid seems important. =]
This wasn't really inspired by much other than seeing the spam in the
logging for analyses, but it seems better to get it checked in rather
than forgetting about it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225163
Chandler Carruth [Mon, 5 Jan 2015 12:21:44 +0000 (12:21 +0000)]
[PM] Add names and debug logging for analysis passes to the new pass
manager.
This starts to allow us to test analyses more easily, but it's really
only the beginning. Some of the code here is still untestable without
manual changes to create analysis passes, but I wanted to factor it into
as small a set of chunks as possible.
Next up in order to be able to test things are, in no particular order:
- No-op analysis passes so we don't have to use real ones to exercise
the pass manager itself.
- Automatic way of generating dummy passes that require an analysis be
run, including a variant that calls a 'print' method on a pass to make
it even easier to print out the results of an analysis.
- Dummy passes that invalidate all analyses for their IR unit so we can
test invalidation and re-runs.
- Automatic way to print each analysis pass as it is re-run.
- Automatic but optional verification of analysis passes everywhere
possible.
I'm not claiming I'll get to all of these immediately, but that's what
is in the pipeline at some stage. I'm fleshing out exactly what I need
and what to prioritize by working on converting analyses and then trying
to test the conversion. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225162
Craig Topper [Mon, 5 Jan 2015 10:15:49 +0000 (10:15 +0000)]
Replace several 'assert(false' with 'llvm_unreachable' or fold a condition into the assert.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225160
Jiangning Liu [Mon, 5 Jan 2015 10:08:58 +0000 (10:08 +0000)]
Fixed a bug in the memory dependence checking module of loop vectorization. The following loop should not be vectorized with the current algorithm.
{code}
// loop body
... = a[i] (1)
... = a[i+1] (2)
.......
a[i+1] = .... (3)
a[i] = ... (4)
{code}
The algorithm tries to collect memory access candidates from AliasSetTracker, and then check memory dependences against one another. The memory accesses are unique in AliasSetTracker, and a single memory access in AliasSetTracker may map to multiple entries in AccessAnalysis, which could cover both 'read' and 'write'. Originally the algorithm only checked the 'write' entry in Accesses if only a 'write' existed. This is incorrect; the consequence is that it ignored all read accesses, so some RAW and WAR dependences were missed.
For the case given above, if we ignore the two reads, the dependence between (1) and (3) cannot be captured, and the loop will be incorrectly vectorized.
The fix simply inserts a new loop to find all entries in Accesses. Since it skips most other memory accesses by checking the Value pointer at the very beginning of the loop, it should not increase compile time noticeably.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225159
Michael Gottesman [Mon, 5 Jan 2015 08:55:19 +0000 (08:55 +0000)]
Convert SmallMapVector from a class to a struct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225158
Craig Topper [Mon, 5 Jan 2015 08:19:12 +0000 (08:19 +0000)]
[X86] Remove the predicates from the register forms of the 2-byte inc and dec instructions. Remove the 32-bit mode only versions that existed for the disassembler. Move the patterns out of the instructions so they can still be qualified with predicates.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225157
Craig Topper [Mon, 5 Jan 2015 08:19:10 +0000 (08:19 +0000)]
[X86] Simplify code a little by just summing flags instead of conditionally incrementing. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225156
Craig Topper [Mon, 5 Jan 2015 08:19:07 +0000 (08:19 +0000)]
[X86] Remove unnecessary redeclaration of a variable with the same assignment as the beginning of the function. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225155
Craig Topper [Mon, 5 Jan 2015 08:19:05 +0000 (08:19 +0000)]
[X86] Remove a strange fixme referring to a hack that doesn't seem to exist since the code is in a comment. Can't figure out what the body of the 'if' was supposed to be anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225154
Craig Topper [Mon, 5 Jan 2015 08:19:03 +0000 (08:19 +0000)]
[x86] Reduce text duplication for similar operand class declarations in tablegen instruction info. No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225153
Craig Topper [Mon, 5 Jan 2015 08:18:59 +0000 (08:18 +0000)]
[X86] Fix the immediate size to match the address size in the operand types for the move to/from absolute memory instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225152
Craig Topper [Mon, 5 Jan 2015 08:18:52 +0000 (08:18 +0000)]
[X86] Remove unused operand type from disassembler handling. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225151
Hal Finkel [Mon, 5 Jan 2015 05:24:42 +0000 (05:24 +0000)]
[PowerPC] Enable speculation of cttz/ctlz
PPC has an instruction for ctlz with defined zero behavior, and our lowering of
cttz (provided by DAGCombine) is also efficient and branchless, so speculating
these makes sense.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225150
Chandler Carruth [Mon, 5 Jan 2015 04:17:53 +0000 (04:17 +0000)]
[SROA] Apply a somewhat heavy and unpleasant hammer to fix PR22093, an
assert out of the new pre-splitting in SROA.
This fix makes the code do what was originally intended -- when we have
a store of a load both dealing in the same alloca, we force them to both
be pre-split with identical offsets. This is really quite hard to do
because we can keep discovering problems as we go along. We have to
track every load over the current alloca which for any reason becomes
invalid for pre-splitting, and go back to remove all stores of those
loads. I've included a couple of test cases derived from PR22093 that
cover the different ways this can happen. While that PR only really
triggered the first of these two, it's the same fundamental issue.
The other challenge here is documented in a FIXME now. We end up being
quite a bit more aggressive for pre-splitting when loads and stores
don't refer to the same alloca. This aggressiveness comes at the cost of
introducing potentially redundant loads. It isn't clear that this is the
right balance. It might be considerably better to require that we only
do pre-splitting when we can presplit every load and store involved in
the entire operation. That would give more consistent if conservative
results. Unfortunately, it requires a non-trivial change to the actual
pre-splitting operation in order to correctly handle cases where we end
up pre-splitting stores out-of-order. And it isn't 100% clear that this
is the right direction, although I'm starting to suspect that it is.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225149
Hal Finkel [Mon, 5 Jan 2015 04:05:21 +0000 (04:05 +0000)]
[LangRef] Correct a typo
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225148
Hal Finkel [Mon, 5 Jan 2015 03:41:38 +0000 (03:41 +0000)]
[PowerPC] Materialize i64 constants using rotation with masking
r225135 added the ability to materialize i64 constants using rotations in order
to reduce the instruction count. Sometimes we can use a rotation only with some
extra masking, so that we take advantage of the fact that generating a bunch of
extra higher-order 1 bits is easy using li/lis.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225147
Chandler Carruth [Mon, 5 Jan 2015 03:03:31 +0000 (03:03 +0000)]
[PM] Cleanup a place where I forgot to update the header guards when
renaming a file from AssumptionTracker.h to AssumptionCache.h.
Thanks to Philip Reames for noticing and pointing it out in code review!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225146
Chandler Carruth [Mon, 5 Jan 2015 02:47:05 +0000 (02:47 +0000)]
[PM] Switch the new pass manager to use a reference-based API for IR
units.
This was debated back and forth a bunch, but using references is now
clearly cleaner. Of all the code written using pointers thus far, in
only one place did it really make more sense to have a pointer. In most
cases, this just removes immediate dereferencing from the code. I think
it is much better to get errors on null IR units earlier, potentially
at compile time, than to delay it.
Most notably, the legacy pass manager uses references for its routines
and so as more and more code works with both, the use of pointers was
likely to become really annoying. I noticed this when I ported the
domtree analysis over and wrote the entire thing with references only to
have it fail to compile. =/ It seemed better to switch now than to
delay. We can, of course, revisit this if we learn that references are
really problematic in the API.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225145
Chandler Carruth [Mon, 5 Jan 2015 00:08:53 +0000 (00:08 +0000)]
[PM] Wire up support for explicitly running the verifier pass.
The required functionality has been there for some time, but I never
managed to actually wire it into the command line registry of passes.
Let's do that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225144
Chandler Carruth [Sun, 4 Jan 2015 23:13:57 +0000 (23:13 +0000)]
[PM] Cleanup a const_cast and other machinery left over in this code
from before I removed that non-const use of the function.
The unused variable that held the const_cast was already kindly removed
by Michael.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225143
Simon Pilgrim [Sun, 4 Jan 2015 19:08:03 +0000 (19:08 +0000)]
[X86][SSE] Added vector packing test for pr12412
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225138
Simon Pilgrim [Sun, 4 Jan 2015 17:52:00 +0000 (17:52 +0000)]
[X86][SSE] Added vector integer truncation tests - based off pr15524
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225137
Hal Finkel [Sun, 4 Jan 2015 15:43:55 +0000 (15:43 +0000)]
[PowerPC] Materialize i64 constants using rotation
Materializing full 64-bit constants on PPC64 can be expensive, requiring up to
5 instructions depending on the locations of the non-zero bits. Sometimes
materializing a rotated constant, and then applying the inverse rotation, requires
fewer instructions than the direct method. If so, do that instead.
In r225132, I added support for forming constants using bit inversion. In
effect, this reverts that commit and replaces it with rotation support. The bit
inversion is useful for turning constants that are mostly ones into ones that
are mostly zeros (thus enabling a more-efficient shift-based materialization),
but the same effect can be obtained by using negative constants and a rotate,
and that is at least as efficient, if not more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225135
Michael Kuperstein [Sun, 4 Jan 2015 13:35:44 +0000 (13:35 +0000)]
Fix unused variable warning for non-asserts builds. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225133
Hal Finkel [Sun, 4 Jan 2015 12:35:03 +0000 (12:35 +0000)]
[PowerPC] Materialize i64 constants using bit inversion
Materializing full 64-bit constants on PPC64 can be expensive, requiring up to
5 instructions depending on the locations of the non-zero bits. Sometimes
materializing the bit-reversed constant, and then flipping the bits, requires
fewer instructions than the direct method. If so, do that instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225132
Chandler Carruth [Sun, 4 Jan 2015 12:03:27 +0000 (12:03 +0000)]
[PM] Split the AssumptionTracker immutable pass into two separate APIs:
a cache of assumptions for a single function, and an immutable pass that
manages those caches.
The motivation for this change is two fold. Immutable analyses are
really hacks around the current pass manager design and don't exist in
the new design. This is usually OK, but it requires that the core logic
of an immutable pass be reasonably partitioned off from the pass logic.
This change does precisely that. As a consequence it also paves the way
for the *many* utility functions that deal in the assumptions to live in
both pass manager worlds by creating a separate non-pass object with
its own independent API that they all rely on. Now, the only bits of the
system that deal with the actual pass mechanics are those that actually
need to deal with the pass mechanics.
Once this separation is made, several simplifications become pretty
obvious in the assumption cache itself. Rather than using a set and
callback value handles, it can just be a vector of weak value handles.
The callers can easily skip the handles that are null, and eventually we
can wrap all of this up behind a filter iterator.
For now, this adds boilerplate to the various passes, but this kind of
boilerplate will end up making it possible to port these passes to the
new pass manager, and so it will end up factored away pretty reasonably.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225131
David Majnemer [Sun, 4 Jan 2015 07:36:02 +0000 (07:36 +0000)]
InstCombine: match can find ConstantExprs, don't assume we have a Value
We assumed the output of a match was a Value; this would cause us to
assert because we would fail a cast<>. Instead, use a helper in the
Operator family to hide the distinction between Value and Constant.
This fixes PR22087.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225127
David Majnemer [Sun, 4 Jan 2015 07:06:53 +0000 (07:06 +0000)]
ValueTracking: ComputeNumSignBits should tolerate misshapen phi nodes
PHI nodes can have zero operands in the middle of a transform. It is
expected that utilities in Analysis don't freak out when this happens.
Note that it is considered invalid to allow these misshapen phi nodes to
make it to another pass.
This fixes PR22086.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225126
Lang Hames [Sun, 4 Jan 2015 01:20:55 +0000 (01:20 +0000)]
[APFloat][ADT] Fix sign handling logic for FMA results that truncate to zero.
This patch adds a check for underflow when truncating results back to lower
precision at the end of an FMA. The additional sign handling logic in
APFloat::fusedMultiplyAdd should only be performed when the result of the
addition step of the FMA (in full precision) is exactly zero, not when the
result underflows to zero.
Unit tests for this case and related signed zero FMA results are included.
Fixes <rdar://problem/18925551>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225123
Saleem Abdulrasool [Sat, 3 Jan 2015 21:35:09 +0000 (21:35 +0000)]
llvm-readobj: add support to dump COFF export tables
This enhances llvm-readobj to print out the COFF export table, similar to the
-coff-import option. This is useful for testing in lld.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225120
Saleem Abdulrasool [Sat, 3 Jan 2015 21:35:00 +0000 (21:35 +0000)]
ARM: permit tail calls to weak externals on COFF
Weak externals are resolved statically, so we can actually generate the tail
call on PE/COFF targets without breaking the requirements. It is questionable
whether we want to propagate the current behaviour for MachO as the requirements
are part of the ARM ELF specifications, and it seems that prior to SVN
r215890, we would have tail'ed the call. For now, be conservative and only
permit it on PE/COFF where the call will always be fully resolved.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225119
Hal Finkel [Sat, 3 Jan 2015 17:58:24 +0000 (17:58 +0000)]
[PowerPC/BlockPlacement] Allow target to provide a per-loop alignment preference
The existing code provided for specifying a global loop alignment preference.
However, the preferred loop alignment might depend on the loop itself. For
recent POWER cores, loops between 5 and 8 instructions should have 32-byte
alignment (while the others are better with 16-byte alignment) so that the
entire loop will fit in one i-cache line.
To support this, getPrefLoopAlignment has been made virtual, and can be
provided with an optional MachineLoop* so the target can inspect the loop
before answering the query. The default behavior, as before, is to return the
value set with setPrefLoopAlignment. MachineBlockPlacement now queries the
target for each loop instead of only once per function. There should be no
functional change for other targets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225117
Hal Finkel [Sat, 3 Jan 2015 14:58:25 +0000 (14:58 +0000)]
[PowerPC] Use 16-byte alignment for modern cores for functions/loops
Most modern PowerPC cores prefer that functions and loops start on
16-byte-aligned boundaries (*), so instruct block placement, etc. to make this
happen. The branch selector has also been adjusted to account for the extra
nops that might now be inserted before loop headers.
(*) Some cores actually prefer other alignments for small loops, but that will
be addressed in a follow-up commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225115
Craig Topper [Sat, 3 Jan 2015 08:16:34 +0000 (08:16 +0000)]
Minor cleanup to all the switches after MatchInstructionImpl in all the AsmParsers.
Make sure they all have llvm_unreachable on the default path out of the switch. Remove unnecessary "default: break". Remove a 'return' after unreachable. Fix some indentation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225114
Craig Topper [Sat, 3 Jan 2015 08:16:29 +0000 (08:16 +0000)]
Fix some formatting in tablegen output.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225113
Craig Topper [Sat, 3 Jan 2015 08:16:14 +0000 (08:16 +0000)]
Replace some 'unreachable' comments with llvm_unreachable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225112
David Majnemer [Sat, 3 Jan 2015 02:33:25 +0000 (02:33 +0000)]
ValueTracking: Make computeKnownBits for Arguments a little more clear
We would sometimes leave the out-param APInts untouched while going
through computeKnownBits. While I don't know of a way to trigger a bug
involving this in practice, it goes against the overall design of
computeKnownBits.
Found via code inspection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225109
Hal Finkel [Sat, 3 Jan 2015 01:16:37 +0000 (01:16 +0000)]
[PowerPC] Add support for the CMPB instruction
Newer POWER cores, and the A2, support the cmpb instruction. This instruction
compares its operands, treating each of the 8 bytes in the GPRs separately,
returning a 'mask' result of 0 (for false) or -1 (for true) in each byte.
Code generation support is added, in the form of a PPCISelDAGToDAG
DAG-preprocessing routine, that recognizes patterns close to what the
instruction computes (either exactly, or related by a constant masking
operation), and generates the cmpb instruction (along with any necessary
constant masking operation). This can be expanded if use cases arise.
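As a reference for the semantics described above, a hedged C sketch (editorial illustration, not the backend code):
#include <stdint.h>
/* cmpb-style byte-wise compare: each result byte is 0xff where the
   corresponding bytes of a and b are equal, and 0x00 otherwise. */
uint64_t cmpb_reference(uint64_t a, uint64_t b) {
  uint64_t r = 0;
  for (int i = 0; i < 8; ++i) {
    uint64_t byte_mask = 0xffull << (8 * i);
    if ((a & byte_mask) == (b & byte_mask))
      r |= byte_mask;
  }
  return r;
}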
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225106
Kostya Serebryany [Sat, 3 Jan 2015 00:54:43 +0000 (00:54 +0000)]
[asan] simplify the tracing code, make it use the same guard variables as coverage
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225103
Craig Topper [Sat, 3 Jan 2015 00:00:20 +0000 (00:00 +0000)]
[X86] Disassembler support for move to/from %rax with a 32-bit memory offset when the REX.W and AdSize prefixes are both present.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225099
Craig Topper [Sat, 3 Jan 2015 00:00:14 +0000 (00:00 +0000)]
[X86] Use 32-bit sign extended immediate for 64-bit LOCK_ArithBinOp with sign extended immediate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225098
Chandler Carruth [Fri, 2 Jan 2015 23:34:39 +0000 (23:34 +0000)]
[PM] Add proper documentation for the ModulePassManager and
FunctionPassManager. These never got documented, likely due to the
clutter of this header file. This fixes another problem people noticed
when they started trying to use the new pass manager.
I've also used this to document the aspirational constraints I would
like to hold passes to. I don't really have a better place to document
such things at this point, but eventually will probably create a proper
.rst file and page for the LLVM pass infrastructure that carries such
high-level concerns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225097
Chandler Carruth [Fri, 2 Jan 2015 23:25:16 +0000 (23:25 +0000)]
[PM] Actually include the correct file name. Sorry for the breakage.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225096
Chandler Carruth [Fri, 2 Jan 2015 23:16:59 +0000 (23:16 +0000)]
[PM] Lift the majority of the template boilerplate used to implement the
concept-based polymorphism in the pass manager to a separate header.
I got feedback from someone reading the code and trying to use it that
this was really making it hard to dive in and start using these APIs and
that makes a lot of sense.
This only requires a moderate amount of gymnastics to separate in this
way, namely rinsing the PreservedAnalysis object through a template
argument in a few places so that it is dependent and we only examine it
on instantiation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225094
Chandler Carruth [Fri, 2 Jan 2015 22:51:44 +0000 (22:51 +0000)]
[PM] Fix some formatting where clang-format has improved recently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225092
Philip Reames [Fri, 2 Jan 2015 19:46:49 +0000 (19:46 +0000)]
Reformat statepoint documentation and fix a couple of typos
Patch by Ramkumar Ramachandra <artagnon@gmail.com>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225084
Andrea Di Biagio [Fri, 2 Jan 2015 10:47:46 +0000 (10:47 +0000)]
Improved comments. No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225080
Craig Topper [Fri, 2 Jan 2015 07:36:23 +0000 (07:36 +0000)]
[X86] Bring some better consistency to the naming of the move to/from %al/ax/eax/rax with memory offset.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225078
David Majnemer [Fri, 2 Jan 2015 07:29:47 +0000 (07:29 +0000)]
InstCombine: Detect when llvm.umul.with.overflow always overflows
We know overflow always occurs if both ~LHSKnownZero * ~RHSKnownZero
and LHSKnownOne * RHSKnownOne overflow.
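A small C illustration of such an always-overflowing case (hedged sketch; assumes 32-bit unsigned int and the GCC/Clang __builtin_umul_overflow builtin, which clang lowers to llvm.umul.with.overflow):
#include <stdbool.h>
/* Both operands have their top bit forced on, so each is >= 2^31 and the
   32-bit product always overflows; the overflow check can fold to true. */
bool always_overflows(unsigned a, unsigned b) {
  unsigned product;
  return __builtin_umul_overflow(a | 0x80000000u, b | 0x80000000u, &product);
}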
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225077
David Majnemer [Fri, 2 Jan 2015 07:29:43 +0000 (07:29 +0000)]
Analysis: Reformulate WillNotOverflowUnsignedMul for reusability
WillNotOverflowUnsignedMul's smarts will live in ValueTracking as
computeOverflowForUnsignedMul. It now returns a tri-state result:
never overflows, always overflows and sometimes overflows.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225076
Craig Topper [Fri, 2 Jan 2015 07:02:25 +0000 (07:02 +0000)]
[X86] Make the instructions that use AdSize16/32/64 co-exist together without using mode predicates.
This is necessary to allow the disassembler to be able to handle AdSize32 instructions in 64-bit mode when address size prefix is used.
Eventually we should probably also support 'addr32' and 'addr16' in the assembler to override the address size on some of these instructions. But for now we'll just use special operand types that will lookup the current mode size to select the right instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225075
Chandler Carruth [Fri, 2 Jan 2015 03:55:54 +0000 (03:55 +0000)]
[SROA] Teach SROA to be more aggressive in splitting now that we have
a pre-splitting pass over loads and stores.
Historically, splitting could cause enough problems that I hamstrung the
entire process with a requirement that splittable integer loads and
stores must cover the entire alloca. All smaller loads and stores were
unsplittable to prevent chaos from ensuing. With the new pre-splitting
logic that does load/store pair splitting I introduced in r225061, we
can now very nicely handle arbitrarily splittable loads and stores. In
order to fully benefit from these smarts, we need to mark all of the
integer loads and stores as splittable.
However, we don't actually want to rewrite partitions with all integer
loads and stores marked as splittable. This will fail to extract scalar
integers from aggregates, which is kind of the point of SROA. =] In
order to resolve this, what we really want to do is only do
pre-splitting on the alloca slices with integer loads and stores fully
splittable. This allows us to uncover all non-integer uses of the alloca
that would benefit from a split in an integer load or store (and where
introducing the split is safe because it is just memory transfer from
a load to a store). Once done, we make all the non-whole-alloca integer
loads and stores unsplittable just as they have historically been,
repartition and rewrite.
The result is that when there are integer loads and stores anywhere
within an alloca (such as from a memcpy of a sub-object of a larger
object), we can split them up if there are non-integer components to the
aggregate hiding beneath. I've added the challenging test cases to
demonstrate how this is able to promote to scalars even a case where we
have even *partially* overlapping loads and stores.
This restores the single-store behavior for small arrays of i8s which is
really nice. I've restored both the little endian testing and big endian
testing for these exactly as they were prior to r225061. It also forced
me to be more aggressive in an alignment test to actually defeat SROA.
=] Without the added volatiles there, we actually split up the weird i16
loads and produce nice double allocas with better alignment.
This also uncovered a number of bugs where we failed to handle
splittable load and store slices which didn't have a beginning offset of
zero. Those fixes are included, and without them the existing test cases
explode in glorious fireworks. =]
I've kept support for leaving whole-alloca integer loads and stores as
splittable even for the purpose of rewriting, but I think that's likely
no longer needed. With the new pre-splitting, we might be able to remove
all the splitting support for loads and stores from the rewriter. Not
doing that in this patch to try to isolate any performance regressions
that causes in an easy to find and revert chunk.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225074
Chandler Carruth [Fri, 2 Jan 2015 02:47:38 +0000 (02:47 +0000)]
[SROA] Make the computation of adjusted pointers not leak GEP
instructions.
I noticed this when working on dialing up how aggressively we can
pre-split loads and stores. My test case wasn't passing because dead
GEPs into the allocas persisted when they were built by this routine.
This isn't terribly harmful, we still rewrote and promoted the alloca
and I can't conceive of how to cause this to happen in a case where we
will keep the exact same alloca but rewrite and promote the uses of it.
If that ever happened, we'd get an assert out of mem2reg.
So I don't have a direct test case yet, but the subsequent commit's test
case wouldn't pass without this. There are other problems fixed by this
patch that I spotted purely by inspection such as the fact that
getAdjustedPtr could have actually deleted dead base pointers. I don't
know how to get a base pointer to go into getAdjustedPtr today, so
I think this bug could never have manifested (and I certainly can't
write a test case for it) but, it wasn't the intent of the code. The
code really just wanted to GC the new instructions built. That can be
done more directly by comparing with the base pointer which is the only
non-new instruction that this code can return.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225073
Chandler Carruth [Fri, 2 Jan 2015 00:34:29 +0000 (00:34 +0000)]
[SROA] Add a test case for r225068 / PR22080.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225070
Chandler Carruth [Fri, 2 Jan 2015 00:10:22 +0000 (00:10 +0000)]
[SROA] Fix the loop exit placement to be prior to indexing the splits
array. This prevents it from walking out of bounds on the splits array.
Bug found with the existing tests by ASan and by the MSVC debug build.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225069
Chandler Carruth [Thu, 1 Jan 2015 23:26:16 +0000 (23:26 +0000)]
[SROA] Fix two total think-os in r225061 that should have been caught on
a +asserts bootstrap, but my bootstrap had asserts off. Oops.
Anyways, in some places it is reasonable to cast (as a sanity check) the
pointer operand to a load or store to an instruction within SROA --
namely when the pointer operand is expected to be derived from an
alloca, and thus always an instruction. However, the pre-splitting code
also deals with loads and stores to non-alloca pointers and there we
need to just use the Value*. Nothing about the code relied on the
instruction cast, it was only there essentially as an invariant
assertion. Remove the two that don't actually hold.
This should fix the proximate issue in PR22080, but I'm also doing an
asserts bootstrap myself to see if there are other issues lurking.
I'll craft a reduced test case in a moment, but I wanted to get the tree
healthy as quickly as possible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225068
Hal Finkel [Thu, 1 Jan 2015 19:33:59 +0000 (19:33 +0000)]
[PowerPC] use UINT64_C instead of ul
Attempting to fix PR22078 (building on 32-bit systems) by replacing my careless
use of 1ul with UINT64_C(1) so the constant is a uint64_t.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225066
Michael Gottesman [Thu, 1 Jan 2015 13:54:05 +0000 (13:54 +0000)]
Revert "Just use a using directive in SmallMapVector instead of inheriting from MapVector itself."
This reverts commit r225059. I think MSVC 2012 has a problem with this. This is
an attempt to fix one of the MSVC 2012 bots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225065
Chandler Carruth [Thu, 1 Jan 2015 13:01:25 +0000 (13:01 +0000)]
Revert r225053: Add an ArrayRef upcasting constructor from ArrayRef<U*> -> ArrayRef<T*> where T is a base of U.
This appears to have broken at least the windows build bots due to
compile errors in the predicate that didn't simply suppress the overload.
I'm not sure what the fix is, and the bots have been broken for a long
time now so I'm just reverting until Michael can figure out a fix.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225064
Chandler Carruth [Thu, 1 Jan 2015 12:56:47 +0000 (12:56 +0000)]
[SROA] Switch to using a more direct debug logging technique in one part
of my new load and store splitting, and fix a bug where it logged
a totally irrelevant slice rather than the actual slice in question.
The logging here previously worked because we used to place new slices
onto the back of the core sequence, but that caused other problems.
I updated the actual code to store new slices in their own vector but
didn't update the logging. There isn't a good way to reuse the logging
any more, and frankly it wasn't needed. We can directly log this bit
more easily.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225063
Chandler Carruth [Thu, 1 Jan 2015 12:01:03 +0000 (12:01 +0000)]
[SROA] Fix formatting with clang-format which I managed to fail to do
prior to committing r225061. Sorry for that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225062
Chandler Carruth [Thu, 1 Jan 2015 11:54:38 +0000 (11:54 +0000)]
[SROA] Teach SROA how to much more intelligently handle split loads and
stores.
When there are accesses to an entire alloca with an integer
load or store as well as accesses to small pieces of the alloca, SROA
splits up the large integer accesses. In order to do that, it uses bit
math to merge the small accesses into large integers. While this is
effective, it produces insane IR that can cause significant problems in
the rest of the optimizer:
- It can cause load and store mismatches with GVN on the non-alloca side
where we end up loading an i64 (or some such) rather than loading
specific elements that are stored.
- We can't always get rid of the integer bit math, which is why we can't
always fix the loads and stores to work well with GVN.
- This is especially bad when we have operations that mix poorly with
integer bit math such as floating point operations.
- It will block things like the vectorizer which might be able to handle
the scalar stores that underlie the aggregate.
At the same time, we can't just directly split up these loads and stores
in all cases. If there is actual integer arithmetic involved on the
values, then using integer bit math is actually the perfect lowering
because we can often combine it heavily with the surrounding math.
The solution this patch provides is to find places where SROA is
partitioning aggregates into small elements, and look for splittable
loads and stores that it can split all the way to some other adjacent
load and store. These are uniformly the cases where failing to split the
loads and stores hurts the optimizer that I have seen, and I've looked
extensively at the code produced both from more and less aggressive
approaches to this problem.
However, it is quite tricky to actually do this in SROA. We may have
loads and stores to the same alloca, or other complex patterns that are
hard to handle. This complexity leads to the somewhat subtle algorithm
implemented here. We have to do this entire process as a separate pass
over the partitioning of the alloca, and split up all of the loads prior
to splitting the stores so that we can handle safely the cases of
overlapping, including partially overlapping, loads and stores to the
same alloca. We also have to reconstitute the post-split slice
configuration so we can avoid iterating again over all the alloca uses
(the slow part of SROA). But we also have to ensure that when we split
up loads and stores to *other* allocas, we *do* re-iterate over them in
SROA to adapt to the more refined partitioning now required.
With this, I actually think we can fix a long-standing TODO in SROA
where I avoided splitting as many loads and stores as probably should be
splittable. This limitation historically mitigated the fallout of all
the bad things mentioned above. Now that we have more intelligent
handling, I plan to remove the FIXME and more aggressively mark integer
loads and stores as splittable. I'll do that in a follow-up patch to
help with bisecting any fallout.
The net result of this change should be more fine-grained and accurate
scalars being formed out of aggregates. At the very least, Clang now
generates perfect code for this high-level test case using
std::complex<float>:
#include <complex>

void g1(std::complex<float> &x, float a, float b) {
  x += std::complex<float>(a, b);
}
void g2(std::complex<float> &x, float a, float b) {
  x -= std::complex<float>(a, b);
}
void foo(const std::complex<float> &x, float a, float b,
         std::complex<float> &x1, std::complex<float> &x2) {
  std::complex<float> l1 = x;
  g1(l1, a, b);
  std::complex<float> l2 = x;
  g2(l2, a, b);
  x1 = l1;
  x2 = l2;
}
This code isn't just hypothetical either. It was reduced out of the hot
inner loops of essentially every part of the Eigen math library when
using std::complex<float>. Those loops would consistently and
pervasively hop between the floating point unit and the integer unit due
to bit math extraction and insertion of floating point values that were
"stored" in a 64-bit integer register around the loop backedge.
So far, this change has passed a bootstrap and I have done some other
testing and so far, no issues. That doesn't mean there won't be though,
so I'll be prepared to help with any fallout. If you see performance swings
in particular, please let me know. I'm very curious what all the impact
of this change will be. Stay tuned for the follow-up to also split more
integer loads and stores.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225061
Michael Gottesman [Thu, 1 Jan 2015 08:05:41 +0000 (08:05 +0000)]
Just use a using directive in SmallMapVector instead of inheriting from MapVector itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225059
Hal Finkel [Thu, 1 Jan 2015 02:53:29 +0000 (02:53 +0000)]
[PowerPC] Improve instruction selection bit-permuting operations (64-bit)
This is the second installment of improvements to instruction selection for "bit
permutation" instruction sequences. r224318 added logic for instruction
selection for 32-bit bit permutation sequences, and this adds lowering for
64-bit sequences. The 64-bit sequences are more complicated than the 32-bit
ones because:
a) the 64-bit versions of the 32-bit rotate-and-mask instructions
work by replicating the lower 32-bits of the value-to-be-rotated into the
upper 32 bits -- and integrating this into the cost modeling for the various
bit group operations is non-trivial
b) unlike the 32-bit instructions in 32-bit mode, the rotate-and-mask instructions
cannot, in one instruction, specify the
mask starting index, the mask ending index, and the rotation factor. Also,
forming arbitrary 64-bit constants is more complicated than in 32-bit mode
because the number of instructions necessary is value dependent.
Plus, support for 'late masking' was added: it is sometimes more efficient to
treat the overall value as if it had no mandatory zero bits when planning the
bit-group insertions, and then mask them in at the very end. Unfortunately, as
the structure of the bit groups is different in the two cases, the more
feasible implementation technique was to generate both instruction sequences,
and then pick the shorter one.
And finally, we now generate reasonable code for i64 bswap:
rldicl 5, 3, 16, 0
rldicl 4, 3, 8, 0
rldicl 6, 3, 24, 0
rldimi 4, 5, 8, 48
rldicl 5, 3, 32, 0
rldimi 4, 6, 16, 40
rldicl 6, 3, 48, 0
rldimi 4, 5, 24, 32
rldicl 5, 3, 56, 0
rldimi 4, 6, 40, 16
rldimi 4, 5, 48, 8
rldimi 4, 3, 56, 0
vs. what we used to produce:
li 4, 255
rldicl 5, 3, 24, 40
rldicl 6, 3, 40, 24
rldicl 7, 3, 56, 8
sldi 8, 3, 8
sldi 10, 3, 24
sldi 12, 3, 40
rldicl 0, 3, 8, 56
sldi 9, 4, 32
sldi 11, 4, 40
sldi 4, 4, 48
andi. 5, 5, 65280
andis. 6, 6, 255
andis. 7, 7, 65280
sldi 3, 3, 56
and 8, 8, 9
and 4, 12, 4
and 9, 10, 11
or 6, 7, 6
or 5, 5, 0
or 3, 3, 4
or 7, 9, 8
or 4, 6, 5
or 3, 3, 7
or 3, 3, 4
which is 12 instructions, instead of 25, and seems optimal (at least in terms
of code size).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225056
Michael Gottesman [Wed, 31 Dec 2014 23:33:24 +0000 (23:33 +0000)]
Add 2x constructors for TinyPtrVector, one that takes in one element and the other that takes in an ArrayRef<EltTy>
Currently one can only construct an empty TinyPtrVector. These are just missing
elements of the API.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225055
Michael Gottesman [Wed, 31 Dec 2014 23:33:21 +0000 (23:33 +0000)]
Add a SmallMapVector class that is a MapVector with a Map of SmallDenseMap and a Vector of SmallVector.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225054
Michael Gottesman [Wed, 31 Dec 2014 23:33:18 +0000 (23:33 +0000)]
Add an ArrayRef upcasting constructor from ArrayRef<U*> -> ArrayRef<T*> where T is a base of U.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225053
Sanjay Patel [Wed, 31 Dec 2014 22:14:05 +0000 (22:14 +0000)]
InstCombine: fsub nsz 0, X ==> fsub nsz -0.0, X
Some day the backend may handle instruction-level fast math flags and make
this transform unnecessary, but it's still better practice to use the canonical
representation of fneg when possible (use a -0.0).
This is a partial fix for PR20870 ( http://llvm.org/bugs/show_bug.cgi?id=20870 ).
See also http://reviews.llvm.org/D6723.
Differential Revision: http://reviews.llvm.org/D6731
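An illustrative C source pattern for this fold (hedged sketch; under no-signed-zeros fast-math, the 0.0f - x below is a negation and canonicalizes to a subtraction from -0.0):
/* Sketch only: with nsz set, 'fsub 0.0, x' is an fneg, and the canonical
   form of fneg uses -0.0 as the left-hand operand. */
float negate(float x) {
  return 0.0f - x;
}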
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225050
Rafael Espindola [Wed, 31 Dec 2014 17:19:34 +0000 (17:19 +0000)]
Add r224985 back with a fix.
The issue was that AArch64 has additional restrictions on when local
relocations can be used. We have to take those into consideration when
deciding to put a L symbol in the symbol table or not.
Original message:
Remove doesSectionRequireSymbols.
In an assembly expression like
bar:
.long L0 + 1
the intended semantics is that bar will contain a pointer one byte past L0.
In sections that are merged by content (strings, 4 byte constants, etc), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell a relocation must point to the
end of a string, since that would look just like the start of the next.
The solution used in ELF is to use relocations with symbols if there is a non-zero
addend.
In MachO before this patch we would just keep all symbols in some sections.
This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.
This patch implements the non-zero addend logic for MachO too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225048
Colin LeMahieu [Wed, 31 Dec 2014 17:14:35 +0000 (17:14 +0000)]
Reverting 225045 and 225043 and XFAIL multiline.ll on hexagon
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225047
Rafael Espindola [Wed, 31 Dec 2014 16:58:05 +0000 (16:58 +0000)]
Add a test for the recent compiler-rt build failure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225046
Colin LeMahieu [Wed, 31 Dec 2014 16:20:00 +0000 (16:20 +0000)]
[Hexagon] Removing assertion to appease buildbot until I can reproduce the problem
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225045
Rafael Espindola [Wed, 31 Dec 2014 16:06:48 +0000 (16:06 +0000)]
Revert "Remove doesSectionRequireSymbols."
This reverts commit r224985.
I am investigating why it made an Apple bot unhappy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225044
Colin LeMahieu [Wed, 31 Dec 2014 15:57:38 +0000 (15:57 +0000)]
[Hexagon] Changing an llvm_unreachable to an assertion and returning 0. Relocations aren't implemented yet but we don't need to abort for this in release builds.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225043
Craig Topper [Wed, 31 Dec 2014 07:24:23 +0000 (07:24 +0000)]
[X86] Update disassembler tests for absolute move instructions to check the encodings. This provides testing for r225036. 64-bit mode is still broken.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225037
Craig Topper [Wed, 31 Dec 2014 07:07:31 +0000 (07:07 +0000)]
[X86] Fix disassembly of absolute moves to work correctly in 16 and 32-bit modes with all 4 combinations of OpSize and AdSize prefixes being present or not.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225036
Craig Topper [Wed, 31 Dec 2014 07:07:11 +0000 (07:07 +0000)]
[x86] Simplify detection of jcxz/jecxz/jrcxz in disassembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225035
David Majnemer [Wed, 31 Dec 2014 04:21:41 +0000 (04:21 +0000)]
InstCombine: try to transform A-B < 0 into A < B
We are allowed to move the 'B' to the right hand side if we can prove
there is no signed overflow and if the comparison itself is signed.
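A hedged C sketch of the pattern (illustrative only; signed int subtraction in C is assumed not to overflow, which supplies the required no-signed-wrap fact):
#include <stdbool.h>
/* When a - b cannot overflow (nsw) and the comparison is signed,
   (a - b) < 0 is equivalent to a < b, which is what the transform produces. */
bool less_than_via_sub(int a, int b) {
  return (a - b) < 0;
}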
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225034
Alexey Samsonov [Wed, 31 Dec 2014 00:40:28 +0000 (00:40 +0000)]
Revert "merge consecutive stores of extracted vector elements"
This reverts commit r224611. This change causes crashes
in X86 DAG->DAG Instruction Selection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225031