//===---------------------------------------------------------------------===//
Instcombine misses several of these cases (see the testcase in the patch):
http://gcc.gnu.org/ml/gcc-patches/2006-10/msg01519.html

//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history"-related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass.
//===---------------------------------------------------------------------===//

define i32 @test2(float %X, float %Y) {
entry:
        %tmp3 = fcmp uno float %X, %Y           ; <i1> [#uses=1]
        %tmp34 = zext i1 %tmp3 to i8            ; <i8> [#uses=1]
        %tmp = xor i8 %tmp34, 1                 ; <i8> [#uses=1]
        %toBoolnot5 = zext i8 %tmp to i32       ; <i32> [#uses=1]
        ret i32 %toBoolnot5
}

could be optimized further. Instcombine should use its bitwise analysis to
collapse the zext/xor/zext structure to an xor/zext and then remove the
xor by reversing the fcmp.

Desired output:

define i32 @test2(float %X, float %Y) {
entry:
        %tmp3 = fcmp ord float %X, %Y           ; <i1> [#uses=1]
        %tmp34 = zext i1 %tmp3 to i32           ; <i32> [#uses=1]
        ret i32 %tmp34
}

To fix this, we need to make CanEvaluateInDifferentType smarter.

//===---------------------------------------------------------------------===//

We should be able to evaluate this loop:

int test(int x_offs) {
  while (x_offs > 4)
    x_offs -= 4;
  return x_offs;
}

//===---------------------------------------------------------------------===//