author    Sjoerd Meijer <sjoerd.meijer@arm.com>  2019-01-31 08:07:30 +0000
committer Sjoerd Meijer <sjoerd.meijer@arm.com>  2019-01-31 08:07:30 +0000
commit    e690485141b8b3bca573813755d3284f5504ccdf (patch)
tree      6b2c7f596b75813ffe5ffdf0ff705ca424ee9b85 /include/llvm/CodeGen
parent    8829dedd761441dba49e758858177a32dfe116a1 (diff)
download  llvm-e690485141b8b3bca573813755d3284f5504ccdf.tar.gz
[SelectionDAG] Codesize: don't expand SHIFT to SHIFT_PARTS
And instead just generate a libcall. My motivating example on ARM was a simple:

  shl i64 %A, %B

for which the code bloat is quite significant. For other targets that also accept __int128/i128, such as AArch64 and X86, it is likewise beneficial to generate a libcall for these cases when optimising for minsize. On these 64-bit targets the 64-bit shifts are of course unaffected, because the SHIFT/SHIFT_PARTS lowering operation action is not set to custom/expand there.

Differential Revision: https://reviews.llvm.org/D57386

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@352736 91177308-0d34-0410-b5e6-96231b3b80d8
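To illustrate the intent of the new hook, here is a minimal sketch of a target override that opts out of the SHIFT_PARTS expansion when the function is optimised for minimum size. The class name MyTargetLowering is hypothetical, and the use of Function::hasMinSize() is an assumption; the actual ARM change in this revision may differ in detail.

  bool MyTargetLowering::shouldExpandShift(SelectionDAG &DAG,
                                           SDNode *N) const {
    // Expanding a wide shift into SHIFT_PARTS produces a fairly long
    // instruction sequence; when optimising for minimum size, prefer
    // the (smaller) runtime library call instead.
    return !DAG.getMachineFunction().getFunction().hasMinSize();
  }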
Diffstat (limited to 'include/llvm/CodeGen')
-rw-r--r--  include/llvm/CodeGen/TargetLowering.h  7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/include/llvm/CodeGen/TargetLowering.h b/include/llvm/CodeGen/TargetLowering.h
index c21eb7911a1..72535c568a1 100644
--- a/include/llvm/CodeGen/TargetLowering.h
+++ b/include/llvm/CodeGen/TargetLowering.h
@@ -642,6 +642,13 @@ public:
return RepRegClassCostForVT[VT.SimpleTy];
}
+ /// Return true if SHIFT instructions should be expanded to SHIFT_PARTS
+ /// instructions, and false if a library call is preferred (e.g. for code-size
+ /// reasons).
+ virtual bool shouldExpandShift(SelectionDAG &DAG, SDNode *N) const {
+ return true;
+ }
+
/// Return true if the target has native support for the specified value type.
/// This means that it has a register that directly holds it without
/// promotions or expansions.
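For context, a rough sketch of the kind of call site this hook is intended for: when a shift has to be split because its type is wider than a legal register, the expansion path can now consult the hook and fall back to a libcall. The helpers expandIntoShiftParts and lowerShiftAsLibcall below are hypothetical names, not the actual SelectionDAG functions.

  // Hypothetical expansion path; the real logic lives in the
  // SelectionDAG legalizer.
  SDValue expandWideShift(SelectionDAG &DAG, SDNode *N,
                          const TargetLowering &TLI) {
    if (TLI.shouldExpandShift(DAG, N))
      return expandIntoShiftParts(DAG, N);  // hypothetical helper
    return lowerShiftAsLibcall(DAG, N);     // hypothetical helper
  }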