diff --git a/lib/CodeGen/README.txt b/lib/CodeGen/README.txt
index 419f885feed..7f75f65167a 100644
--- a/lib/CodeGen/README.txt
+++ b/lib/CodeGen/README.txt
@@ -26,45 +26,7 @@ and then "merge" mul and mov:
 	sxth	r3, r3
 	mla	r4, r3, lr, r4
 
-It also increase the likelyhood the store may become dead.
-
-//===---------------------------------------------------------------------===//
-
-I think we should have a "hasSideEffects" flag (which is automatically set for
-stuff that "isLoad" "isCall" etc), and the remat pass should eventually be able
-to remat any instruction that has no side effects, if it can handle it and if
-profitable.
-
-For now, I'd suggest having the remat stuff work like this:
-
-1. I need to spill/reload this thing.
-2. Check to see if it has side effects.
-3. Check to see if it is simple enough: e.g. it only has one register
-destination and no register input.
-4. If so, clone the instruction, do the xform, etc.
-
-Advantages of this are:
-
-1. the .td file describes the behavior of the instructions, not the way the
-   algorithm should work.
-2. as remat gets smarter in the future, we shouldn't have to be changing the .td
-   files.
-3. it is easier to explain what the flag means in the .td file, because you
-   don't have to pull in the explanation of how the current remat algo works.
-
-Some potential added complexities:
-
-1. Some instructions have to be glued to it's predecessor or successor. All of
-   the PC relative instructions and condition code setting instruction. We could
-   mark them as hasSideEffects, but that's not quite right. PC relative loads
-   from constantpools can be remat'ed, for example. But it requires more than
-   just cloning the instruction. Some instructions can be remat'ed but it
-   expands to more than one instruction. But allocator will have to make a
-   decision.
-
-4. As stated in 3, not as simple as cloning in some cases. The target will have
-   to decide how to remat it. For example, an ARM 2-piece constant generation
-   instruction is remat'ed as a load from constantpool.
+It also increases the likelihood that the store will become dead.
 
 //===---------------------------------------------------------------------===//
 
@@ -87,14 +49,14 @@ scheduled after any node that reads %reg1039.
 Use local info (i.e. register scavenger) to assign it a free register to allow
 reuse:
-	ldr r3, [sp, #+4]
-	add r3, r3, #3
-	ldr r2, [sp, #+8]
-	add r2, r2, #2
-	ldr r1, [sp, #+4]  <==
-	add r1, r1, #1
-	ldr r0, [sp, #+4]
-	add r0, r0, #2
+        ldr r3, [sp, #+4]
+        add r3, r3, #3
+        ldr r2, [sp, #+8]
+        add r2, r2, #2
+        ldr r1, [sp, #+4]  <==
+        add r1, r1, #1
+        ldr r0, [sp, #+4]
+        add r0, r0, #2
 
 //===---------------------------------------------------------------------===//
 
@@ -143,21 +105,95 @@ load [T + 7] ... load [T + 15]
 //===---------------------------------------------------------------------===//
 
-Tail merging issue:
-When we're trying to merge the tails of predecessors of a block I, and there
-are more than 2 predecessors, we don't do it optimally. Suppose predecessors
-are A,B,C where B and C have 5 instructions in common, and A has 2 in common
-with B or C. We want to get:
-A:
-  jmp C3
-B:
-  jmp C2
-C:
-C2: 3 common to B and C but not A
-C3: 2 common to all 3
-You get this if B and C are merged first, but currently it might randomly decide
-to merge A and B first, which results in not sharing the C2 instructions. We
-could look at all N*(N-1) combinations of predecessors and merge the ones with
-the most instructions in common first. Usually that will be fast, but it
-could get slow on big graphs (e.g. large switches tend to have blocks with many
-predecessors).
+
+It's not always a good idea to choose rematerialization over spilling. If all
+the load / store instructions would be folded, then spilling is cheaper because
+it won't require new live intervals / registers. See 2003-05-31-LongShifts for
+an example.
+
+//===---------------------------------------------------------------------===//
+
+With a copying garbage collector, derived pointers must not be retained across
+collector safe points; the collector could move the objects and invalidate the
+derived pointer. This is bad enough in the first place, but safe points can
+crop up unpredictably. Consider:
+
+    %array = load { i32, [0 x %obj] }** %array_addr
+    %nth_el = getelementptr { i32, [0 x %obj] }* %array, i32 0, i32 %n
+    %old = load %obj** %nth_el
+    %z = div i64 %x, %y
+    store %obj* %new, %obj** %nth_el
+
+If the i64 division is lowered to a libcall, then a safe point will (must)
+appear for the call site. If a collection occurs, %array and %nth_el no longer
+point into the correct object.
+
+The fix for this is to copy address calculations so that dependent pointers
+are never live across safe point boundaries. But the loads cannot be copied
+like this if there was an intervening store, so this may be hard to get right.
+
+Only a concurrent mutator can trigger a collection at the libcall safe point.
+So single-threaded programs do not have this requirement, even with a copying
+collector. Still, LLVM optimizations would probably undo a front-end's careful
+work.
+
+//===---------------------------------------------------------------------===//
+
+The ocaml frametable structure supports liveness information. It would be good
+to support it.
+
+//===---------------------------------------------------------------------===//
+
+The FIXME in ComputeCommonTailLength in BranchFolding.cpp needs to be
+revisited. The check is there to work around a misuse of directives in inline
+assembly.
+
+//===---------------------------------------------------------------------===//
+
+It would be good to detect collector/target compatibility instead of silently
+doing the wrong thing.
+
+//===---------------------------------------------------------------------===//
+
+It would be really nice to be able to write patterns in .td files for copies,
+which would eliminate a bunch of explicit predicates on them (e.g. no side
+effects). Once this is in place, it would be even better to have tblgen
+synthesize the various copy insertion/inspection methods in TargetInstrInfo.
+
+//===---------------------------------------------------------------------===//
+
+Stack coloring improvements:
+
+1. Do proper LiveStackAnalysis on all stack objects including those which are
+   not spill slots.
+2. Reorder objects to fill in gaps between objects (see the sketch below).
+   e.g. 4, 1, <gap>, 4, 1, 1, 1, <gap>, 4  =>  4, 1, 1, 1, 1, 4, 4
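+
+A rough standalone sketch of the gap-filling idea in item 2 (this is not the
+in-tree stack coloring code; the StackObject struct and layout() helper are
+invented for illustration):
+
+  // Lay the objects out in the order given, padding for alignment, and
+  // compare against a layout that places the strictly aligned objects first.
+  #include <algorithm>
+  #include <cstdio>
+  #include <vector>
+
+  struct StackObject { unsigned Size, Align; };
+
+  static unsigned layout(const std::vector<StackObject> &Objs) {
+    unsigned Offset = 0;
+    for (const StackObject &O : Objs) {
+      Offset = (Offset + O.Align - 1) / O.Align * O.Align; // padding == gap
+      Offset += O.Size;
+    }
+    return Offset;
+  }
+
+  int main() {
+    // The object sizes from the example above: 4, 1, 4, 1, 1, 1, 4.
+    std::vector<StackObject> Objs =
+        {{4, 4}, {1, 1}, {4, 4}, {1, 1}, {1, 1}, {1, 1}, {4, 4}};
+    std::printf("in order : %u bytes\n", layout(Objs));  // 20 bytes, two gaps
+    // Place the most strictly aligned objects first so the 1-byte objects
+    // fall into what would otherwise be alignment padding.
+    std::stable_sort(Objs.begin(), Objs.end(),
+                     [](const StackObject &A, const StackObject &B) {
+                       return A.Align > B.Align;
+                     });
+    std::printf("reordered: %u bytes\n", layout(Objs));  // 16 bytes, no gaps
+    return 0;
+  }
+
+Both this ordering and the 4, 1, 1, 1, 1, 4, 4 packing above drop the frame
+from 20 bytes to 16 by letting the 1-byte objects fill what would otherwise be
+alignment padding.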
For +example, in an expanded memset sequence it's not uncommon to see code like this: + + movl $0, 4(%rdi) + movl $0, 8(%rdi) + movl $0, 12(%rdi) + movl $0, 0(%rdi) + +Each of the stores is independent, and the scheduler is currently making an +arbitrary decision about the order. + +//===---------------------------------------------------------------------===// + +Another opportunitiy in this code is that the $0 could be moved to a register: + + movl $0, 4(%rdi) + movl $0, 8(%rdi) + movl $0, 12(%rdi) + movl $0, 0(%rdi) + +This would save substantial code size, especially for longer sequences like +this. It would be easy to have a rule telling isel to avoid matching MOV32mi +if the immediate has more than some fixed number of uses. It's more involved +to teach the register allocator how to do late folding to recover from +excessive register pressure. +