author     Don Hinton <hintonda@gmail.com>    2017-09-17 00:24:43 +0000
committer  Don Hinton <hintonda@gmail.com>    2017-09-17 00:24:43 +0000
commit     95ed232373a9d7b86f7f42171c57209ec2f83579 (patch)
tree       98d73d8a83b168b39fa256d17f10b1125e53476c /docs
parent     f0a489a5b254ed0bc96f675c4368f031c9e7e789 (diff)
download   llvm-95ed232373a9d7b86f7f42171c57209ec2f83579.tar.gz
[ORC][Kaleidoscope] Update ORCJit tutorial.
Summary: Fix a few typos and update names to match current source.

Differential Revision: https://reviews.llvm.org/D37948

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@313473 91177308-0d34-0410-b5e6-96231b3b80d8
Diffstat (limited to 'docs')
-rw-r--r--  docs/tutorial/BuildingAJIT1.rst  87
-rw-r--r--  docs/tutorial/BuildingAJIT2.rst  77
-rw-r--r--  docs/tutorial/BuildingAJIT3.rst  33
-rw-r--r--  docs/tutorial/BuildingAJIT4.rst   2
-rw-r--r--  docs/tutorial/BuildingAJIT5.rst   4
5 files changed, 101 insertions, 102 deletions
diff --git a/docs/tutorial/BuildingAJIT1.rst b/docs/tutorial/BuildingAJIT1.rst
index d1dd46fa2a9..9d7f5047783 100644
--- a/docs/tutorial/BuildingAJIT1.rst
+++ b/docs/tutorial/BuildingAJIT1.rst
@@ -79,7 +79,7 @@ will look like:
int Result = Main();
J.removeModule(H);
-The APIs that we build in these tutorials will all be aovariations on this simple
+The APIs that we build in these tutorials will all be variations on this simple
theme. Behind the API we will refine the implementation of the JIT to add
support for optimization and lazy compilation. Eventually we will extend the
API itself to allow higher-level program representations (e.g. ASTs) to be
@@ -112,11 +112,14 @@ usual include guards and #includes [2]_, we get to the definition of our class:
#include "llvm/ADT/STLExtras.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
+ #include "llvm/ExecutionEngine/JITSymbol.h"
#include "llvm/ExecutionEngine/RTDyldMemoryManager.h"
+ #include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include "llvm/ExecutionEngine/Orc/CompileUtils.h"
#include "llvm/ExecutionEngine/Orc/IRCompileLayer.h"
#include "llvm/ExecutionEngine/Orc/LambdaResolver.h"
- #include "llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h"
+ #include "llvm/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.h"
+ #include "llvm/IR/DataLayout.h"
#include "llvm/IR/Mangler.h"
#include "llvm/Support/DynamicLibrary.h"
#include "llvm/Support/raw_ostream.h"
@@ -159,7 +162,7 @@ That's it for member variables, after that we have a single typedef:
ModuleHandle. This is the handle type that will be returned from our JIT's
addModule method, and can be passed to the removeModule method to remove a
module. The IRCompileLayer class already provides a convenient handle type
-(IRCompileLayer::ModuleSetHandleT), so we just alias our ModuleHandle to this.
+(IRCompileLayer::ModuleHandleT), so we just alias our ModuleHandle to this.
.. code-block:: c++
@@ -181,7 +184,7 @@ each module that is added (a JIT memory manager manages memory allocations,
memory permissions, and registration of exception handlers for JIT'd code). For
this we use a lambda that returns a SectionMemoryManager, an off-the-shelf
utility that provides all the basic memory management functionality required for
-this chapter. Next we initialize our CompileLayer. The Compile laye needs two
+this chapter. Next we initialize our CompileLayer. The CompileLayer needs two
things: (1) A reference to our object layer, and (2) a compiler instance to use
to perform the actual compilation from IR to object files. We use the
off-the-shelf SimpleCompiler instance for now. Finally, in the body of the
@@ -220,7 +223,7 @@ Now we come to the first of our JIT API methods: addModule. This method is
responsible for adding IR to the JIT and making it available for execution. In
this initial implementation of our JIT we will make our modules "available for
execution" by adding them straight to the CompileLayer, which will immediately
-compile them. In later chapters we will teach our JIT to be defer compilation
+compile them. In later chapters we will teach our JIT to defer compilation
of individual functions until they're actually called.
To add our module to the CompileLayer we need to supply both the module and a
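For context, the addModule method described here pairs the module with a lambda-based symbol resolver before handing both to the CompileLayer. A minimal sketch consistent with that description follows; createLambdaResolver and RTDyldMemoryManager::getSymbolAddressInProcess are the facilities listed in the include table below, while the exact body is an illustration, not part of this patch:

.. code-block:: c++

  // Sketch of the addModule body described above. Lambda 1 searches the
  // JIT itself for definitions in already-added modules; lambda 2 falls
  // back to symbols exported from the host process.
  ModuleHandle addModule(std::unique_ptr<Module> M) {
    auto Resolver = createLambdaResolver(
        [&](const std::string &Name) {
          if (auto Sym = CompileLayer.findSymbol(Name, false))
            return Sym;
          return JITSymbol(nullptr);
        },
        [](const std::string &Name) {
          if (auto SymAddr =
                  RTDyldMemoryManager::getSymbolAddressInProcess(Name))
            return JITSymbol(SymAddr, JITSymbolFlags::Exported);
          return JITSymbol(nullptr);
        });

    // addModule can fail, so this sketch unwraps the Expected result with
    // cantFail for brevity.
    return cantFail(CompileLayer.addModule(std::move(M),
                                           std::move(Resolver)));
  }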
@@ -335,7 +338,7 @@ example, use:
.. code-block:: bash
# Compile
- clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orc native` -O3 -o toy
+ clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy
# Run
./toy
@@ -351,45 +354,45 @@ Here is the code:
left as an exercise for the reader. (The KaleidoscopeJIT.h used in the
original tutorials will be a helpful reference).
-.. [2] +-----------------------+-----------------------------------------------+
- | File | Reason for inclusion |
- +=======================+===============================================+
- | STLExtras.h | LLVM utilities that are useful when working |
- | | with the STL. |
- +-----------------------+-----------------------------------------------+
- | ExecutionEngine.h | Access to the EngineBuilder::selectTarget |
- | | method. |
- +-----------------------+-----------------------------------------------+
- | | Access to the |
- | RTDyldMemoryManager.h | RTDyldMemoryManager::getSymbolAddressInProcess|
- | | method. |
- +-----------------------+-----------------------------------------------+
- | CompileUtils.h | Provides the SimpleCompiler class. |
- +-----------------------+-----------------------------------------------+
- | IRCompileLayer.h | Provides the IRCompileLayer class. |
- +-----------------------+-----------------------------------------------+
- | | Access the createLambdaResolver function, |
- | LambdaResolver.h | which provides easy construction of symbol |
- | | resolvers. |
- +-----------------------+-----------------------------------------------+
- | ObjectLinkingLayer.h | Provides the ObjectLinkingLayer class. |
- +-----------------------+-----------------------------------------------+
- | Mangler.h | Provides the Mangler class for platform |
- | | specific name-mangling. |
- +-----------------------+-----------------------------------------------+
- | DynamicLibrary.h | Provides the DynamicLibrary class, which |
- | | makes symbols in the host process searchable. |
- +-----------------------+-----------------------------------------------+
- | | A fast output stream class. We use the |
- | raw_ostream.h | raw_string_ostream subclass for symbol |
- | | mangling |
- +-----------------------+-----------------------------------------------+
- | TargetMachine.h | LLVM target machine description class. |
- +-----------------------+-----------------------------------------------+
+.. [2] +-----------------------------+-----------------------------------------------+
+ | File | Reason for inclusion |
+ +=============================+===============================================+
+ | STLExtras.h | LLVM utilities that are useful when working |
+ | | with the STL. |
+ +-----------------------------+-----------------------------------------------+
+ | ExecutionEngine.h | Access to the EngineBuilder::selectTarget |
+ | | method. |
+ +-----------------------------+-----------------------------------------------+
+ | | Access to the |
+ | RTDyldMemoryManager.h | RTDyldMemoryManager::getSymbolAddressInProcess|
+ | | method. |
+ +-----------------------------+-----------------------------------------------+
+ | CompileUtils.h | Provides the SimpleCompiler class. |
+ +-----------------------------+-----------------------------------------------+
+ | IRCompileLayer.h | Provides the IRCompileLayer class. |
+ +-----------------------------+-----------------------------------------------+
+ | | Access the createLambdaResolver function, |
+ | LambdaResolver.h | which provides easy construction of symbol |
+ | | resolvers. |
+ +-----------------------------+-----------------------------------------------+
+ | RTDyldObjectLinkingLayer.h | Provides the RTDyldObjectLinkingLayer class. |
+ +-----------------------------+-----------------------------------------------+
+ | Mangler.h | Provides the Mangler class for platform |
+ | | specific name-mangling. |
+ +-----------------------------+-----------------------------------------------+
+ | DynamicLibrary.h | Provides the DynamicLibrary class, which |
+ | | makes symbols in the host process searchable. |
+ +-----------------------------+-----------------------------------------------+
+ | | A fast output stream class. We use the |
+ | raw_ostream.h | raw_string_ostream subclass for symbol |
+ | | mangling |
+ +-----------------------------+-----------------------------------------------+
+ | TargetMachine.h | LLVM target machine description class. |
+ +-----------------------------+-----------------------------------------------+
.. [3] Actually they don't have to be lambdas, any object with a call operator
will do, including plain old functions or std::functions.
.. [4] ``JITSymbol::getAddress`` will force the JIT to compile the definition of
the symbol if it hasn't already been compiled, and since the compilation
- process could fail getAddress must be able to return this failure.
\ No newline at end of file
+ process could fail getAddress must be able to return this failure.
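For reference, the client-side pattern this chapter builds towards (add a module, look up a symbol, run it, then remove the module) can be sketched as follows. The wrapper function, the ``main`` symbol name, and the cast are illustrative assumptions; only the addModule/findSymbol/removeModule calls come from the text above:

.. code-block:: c++

  // Illustrative usage sketch: assumes the KaleidoscopeJIT built in this
  // chapter and a Module that defines an externally visible "main".
  #include "KaleidoscopeJIT.h"
  #include "llvm/IR/Module.h"

  using namespace llvm;
  using namespace llvm::orc;

  int runMain(std::unique_ptr<Module> M) {
    KaleidoscopeJIT J;
    auto H = J.addModule(std::move(M));

    // getAddress may fail if compilation fails (see footnote [4]), so this
    // sketch simply unwraps the Expected value with cantFail.
    auto MainSym = J.findSymbol("main");
    auto *Main = (int (*)())(intptr_t)cantFail(MainSym.getAddress());
    int Result = Main();

    // Release the memory backing the JIT'd definitions.
    J.removeModule(H);
    return Result;
  }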
diff --git a/docs/tutorial/BuildingAJIT2.rst b/docs/tutorial/BuildingAJIT2.rst
index 2f22bdad6c1..f1861033cc7 100644
--- a/docs/tutorial/BuildingAJIT2.rst
+++ b/docs/tutorial/BuildingAJIT2.rst
@@ -46,7 +46,7 @@ Chapter 1 and compose an ORC *IRTransformLayer* on top. We will look at how the
IRTransformLayer works in more detail below, but the interface is simple: the
constructor for this layer takes a reference to the layer below (as all layers
do) plus an *IR optimization function* that it will apply to each Module that
-is added via addModuleSet:
+is added via addModule:
.. code-block:: c++
@@ -54,19 +54,20 @@ is added via addModuleSet:
private:
std::unique_ptr<TargetMachine> TM;
const DataLayout DL;
- ObjectLinkingLayer<> ObjectLayer;
+ RTDyldObjectLinkingLayer<> ObjectLayer;
IRCompileLayer<decltype(ObjectLayer)> CompileLayer;
- typedef std::function<std::unique_ptr<Module>(std::unique_ptr<Module>)>
- OptimizeFunction;
+ using OptimizeFunction =
+ std::function<std::shared_ptr<Module>(std::shared_ptr<Module>)>;
IRTransformLayer<decltype(CompileLayer), OptimizeFunction> OptimizeLayer;
public:
- typedef decltype(OptimizeLayer)::ModuleSetHandleT ModuleHandle;
+ using ModuleHandle = decltype(OptimizeLayer)::ModuleHandleT;
KaleidoscopeJIT()
: TM(EngineBuilder().selectTarget()), DL(TM->createDataLayout()),
+ ObjectLayer([]() { return std::make_shared<SectionMemoryManager>(); }),
CompileLayer(ObjectLayer, SimpleCompiler(*TM)),
OptimizeLayer(CompileLayer,
[this](std::unique_ptr<Module> M) {
@@ -101,9 +102,8 @@ define below.
.. code-block:: c++
// ...
- return OptimizeLayer.addModuleSet(std::move(Ms),
- make_unique<SectionMemoryManager>(),
- std::move(Resolver));
+ return cantFail(OptimizeLayer.addModule(std::move(M),
+ std::move(Resolver)));
// ...
.. code-block:: c++
@@ -115,17 +115,17 @@ define below.
.. code-block:: c++
// ...
- OptimizeLayer.removeModuleSet(H);
+ cantFail(OptimizeLayer.removeModule(H));
// ...
Next we need to replace references to 'CompileLayer' with references to
OptimizeLayer in our key methods: addModule, findSymbol, and removeModule. In
addModule we need to be careful to replace both references: the findSymbol call
-inside our resolver, and the call through to addModuleSet.
+inside our resolver, and the call through to addModule.
.. code-block:: c++
- std::unique_ptr<Module> optimizeModule(std::unique_ptr<Module> M) {
+ std::shared_ptr<Module> optimizeModule(std::shared_ptr<Module> M) {
// Create a function pass manager.
auto FPM = llvm::make_unique<legacy::FunctionPassManager>(M.get());
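The hunk ends just after the FunctionPassManager is created; the rest of optimizeModule typically adds a few scalar optimization passes and runs them over every function in the module. A sketch under that assumption (the specific pass list mirrors the other Kaleidoscope chapters and is not part of this patch):

.. code-block:: c++

  // Sketch of a complete optimizeModule, assuming a handful of standard
  // scalar passes (the exact list is illustrative).
  std::shared_ptr<Module> optimizeModule(std::shared_ptr<Module> M) {
    // Create a function pass manager scoped to this module.
    auto FPM = llvm::make_unique<legacy::FunctionPassManager>(M.get());

    // Add some cheap optimizations.
    FPM->add(createInstructionCombiningPass());
    FPM->add(createReassociatePass());
    FPM->add(createGVNPass());
    FPM->add(createCFGSimplificationPass());
    FPM->doInitialization();

    // Run the optimizations over all functions in the module being added
    // to the JIT, then hand the mutated module to the layer below.
    for (auto &F : *M)
      FPM->run(F);

    return M;
  }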
@@ -166,37 +166,30 @@ implementations of the layer concept that can be devised:
template <typename BaseLayerT, typename TransformFtor>
class IRTransformLayer {
public:
- typedef typename BaseLayerT::ModuleSetHandleT ModuleSetHandleT;
+ using ModuleHandleT = typename BaseLayerT::ModuleHandleT;
IRTransformLayer(BaseLayerT &BaseLayer,
TransformFtor Transform = TransformFtor())
: BaseLayer(BaseLayer), Transform(std::move(Transform)) {}
- template <typename ModuleSetT, typename MemoryManagerPtrT,
- typename SymbolResolverPtrT>
- ModuleSetHandleT addModuleSet(ModuleSetT Ms,
- MemoryManagerPtrT MemMgr,
- SymbolResolverPtrT Resolver) {
-
- for (auto I = Ms.begin(), E = Ms.end(); I != E; ++I)
- *I = Transform(std::move(*I));
-
- return BaseLayer.addModuleSet(std::move(Ms), std::move(MemMgr),
- std::move(Resolver));
+ Expected<ModuleHandleT>
+ addModule(std::shared_ptr<Module> M,
+ std::shared_ptr<JITSymbolResolver> Resolver) {
+ return BaseLayer.addModule(Transform(std::move(M)), std::move(Resolver));
}
- void removeModuleSet(ModuleSetHandleT H) { BaseLayer.removeModuleSet(H); }
+ void removeModule(ModuleHandleT H) { BaseLayer.removeModule(H); }
JITSymbol findSymbol(const std::string &Name, bool ExportedSymbolsOnly) {
return BaseLayer.findSymbol(Name, ExportedSymbolsOnly);
}
- JITSymbol findSymbolIn(ModuleSetHandleT H, const std::string &Name,
+ JITSymbol findSymbolIn(ModuleHandleT H, const std::string &Name,
bool ExportedSymbolsOnly) {
return BaseLayer.findSymbolIn(H, Name, ExportedSymbolsOnly);
}
- void emitAndFinalize(ModuleSetHandleT H) {
+ void emitAndFinalize(ModuleHandleT H) {
BaseLayer.emitAndFinalize(H);
}
@@ -215,14 +208,14 @@ comments. It is a template class with two template arguments: ``BaseLayerT`` and
``TransformFtor`` that provide the type of the base layer and the type of the
"transform functor" (in our case a std::function) respectively. This class is
concerned with two very simple jobs: (1) Running every IR Module that is added
-with addModuleSet through the transform functor, and (2) conforming to the ORC
+with addModule through the transform functor, and (2) conforming to the ORC
layer interface. The interface consists of one typedef and five methods:
+------------------+-----------------------------------------------------------+
| Interface | Description |
+==================+===========================================================+
| | Provides a handle that can be used to identify a module |
-| ModuleSetHandleT | set when calling findSymbolIn, removeModuleSet, or |
+| ModuleHandleT | set when calling findSymbolIn, removeModule, or |
| | emitAndFinalize. |
+------------------+-----------------------------------------------------------+
| | Takes a given set of Modules and makes them "available |
@@ -231,28 +224,28 @@ layer interface. The interface consists of one typedef and five methods:
| | the address of the symbols should be read/writable (for |
| | data symbols), or executable (for function symbols) after |
| | JITSymbol::getAddress() is called. Note: This means that |
-| addModuleSet | addModuleSet doesn't have to compile (or do any other |
+| addModule | addModule doesn't have to compile (or do any other |
| | work) up-front. It *can*, like IRCompileLayer, act |
| | eagerly, but it can also simply record the module and |
| | take no further action until somebody calls |
| | JITSymbol::getAddress(). In IRTransformLayer's case |
-| | addModuleSet eagerly applies the transform functor to |
+| | addModule eagerly applies the transform functor to |
| | each module in the set, then passes the resulting set |
| | of mutated modules down to the layer below. |
+------------------+-----------------------------------------------------------+
| | Removes a set of modules from the JIT. Code or data |
-| removeModuleSet | defined in these modules will no longer be available, and |
+| removeModule | defined in these modules will no longer be available, and |
| | the memory holding the JIT'd definitions will be freed. |
+------------------+-----------------------------------------------------------+
| | Searches for the named symbol in all modules that have |
-| | previously been added via addModuleSet (and not yet |
-| findSymbol | removed by a call to removeModuleSet). In |
+| | previously been added via addModule (and not yet |
+| findSymbol | removed by a call to removeModule). In |
| | IRTransformLayer we just pass the query on to the layer |
| | below. In our REPL this is our default way to search for |
| | function definitions. |
+------------------+-----------------------------------------------------------+
| | Searches for the named symbol in the module set indicated |
-| | by the given ModuleSetHandleT. This is just an optimized |
+| | by the given ModuleHandleT. This is just an optimized |
| | search, better for lookup-speed when you know exactly |
| | a symbol definition should be found. In IRTransformLayer |
| findSymbolIn | we just pass this query on to the layer below. In our |
@@ -262,7 +255,7 @@ layer interface. The interface consists of one typedef and five methods:
| | we just added. |
+------------------+-----------------------------------------------------------+
| | Forces all of the actions required to make the code and |
-| | data in a module set (represented by a ModuleSetHandleT) |
+| | data in a module set (represented by a ModuleHandleT) |
| | accessible. Behaves as if some symbol in the set had been |
| | searched for and JITSymbol::getSymbolAddress called. This |
| emitAndFinalize | is rarely needed, but can be useful when dealing with |
@@ -276,11 +269,11 @@ wrinkles like emitAndFinalize for performance), similar to the basic JIT API
operations we identified in Chapter 1. Conforming to the layer concept allows
classes to compose neatly by implementing their behaviors in terms of these
same operations, carried out on the layer below. For example, an eager layer
-(like IRTransformLayer) can implement addModuleSet by running each module in the
+(like IRTransformLayer) can implement addModule by running each module in the
set through its transform up-front and immediately passing the result to the
-layer below. A lazy layer, by contrast, could implement addModuleSet by
+layer below. A lazy layer, by contrast, could implement addModule by
squirreling away the modules doing no other up-front work, but applying the
-transform (and calling addModuleSet on the layer below) when the client calls
+transform (and calling addModule on the layer below) when the client calls
findSymbol instead. The JIT'd program behavior will be the same either way, but
these choices will have different performance characteristics: Doing work
eagerly means the JIT takes longer up-front, but proceeds smoothly once this is
@@ -319,7 +312,7 @@ IRTransformLayer added to enable optimization. To build this example, use:
.. code-block:: bash
# Compile
- clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orc native` -O3 -o toy
+ clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy
# Run
./toy
@@ -329,8 +322,8 @@ Here is the code:
:language: c++
.. [1] When we add our top-level expression to the JIT, any calls to functions
- that we defined earlier will appear to the ObjectLinkingLayer as
- external symbols. The ObjectLinkingLayer will call the SymbolResolver
- that we defined in addModuleSet, which in turn calls findSymbol on the
+ that we defined earlier will appear to the RTDyldObjectLinkingLayer as
+ external symbols. The RTDyldObjectLinkingLayer will call the SymbolResolver
+ that we defined in addModule, which in turn calls findSymbol on the
OptimizeLayer, at which point even a lazy transform layer will have to
do its work.
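To make the layer concept described above concrete, here is an illustrative pass-through layer (not part of this patch) that conforms to the interface in the table by forwarding every operation to the layer below; an eager layer would do its work in addModule, while a lazy one would defer it until findSymbol:

.. code-block:: c++

  // Illustrative sketch: a do-nothing layer that satisfies the layer
  // interface by forwarding every call to its base layer.
  template <typename BaseLayerT>
  class PassThroughLayer {
  public:
    using ModuleHandleT = typename BaseLayerT::ModuleHandleT;

    PassThroughLayer(BaseLayerT &BaseLayer) : BaseLayer(BaseLayer) {}

    Expected<ModuleHandleT>
    addModule(std::shared_ptr<Module> M,
              std::shared_ptr<JITSymbolResolver> Resolver) {
      // An eager layer would do its work here; a lazy one would record M
      // and defer the work until findSymbol forces it.
      return BaseLayer.addModule(std::move(M), std::move(Resolver));
    }

    void removeModule(ModuleHandleT H) { BaseLayer.removeModule(H); }

    JITSymbol findSymbol(const std::string &Name, bool ExportedSymbolsOnly) {
      return BaseLayer.findSymbol(Name, ExportedSymbolsOnly);
    }

    JITSymbol findSymbolIn(ModuleHandleT H, const std::string &Name,
                           bool ExportedSymbolsOnly) {
      return BaseLayer.findSymbolIn(H, Name, ExportedSymbolsOnly);
    }

    void emitAndFinalize(ModuleHandleT H) { BaseLayer.emitAndFinalize(H); }

  private:
    BaseLayerT &BaseLayer;
  };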
diff --git a/docs/tutorial/BuildingAJIT3.rst b/docs/tutorial/BuildingAJIT3.rst
index 071e92c7454..9c4e59fe117 100644
--- a/docs/tutorial/BuildingAJIT3.rst
+++ b/docs/tutorial/BuildingAJIT3.rst
@@ -21,7 +21,7 @@ Lazy Compilation
When we add a module to the KaleidoscopeJIT class from Chapter 2 it is
immediately optimized, compiled and linked for us by the IRTransformLayer,
-IRCompileLayer and ObjectLinkingLayer respectively. This scheme, where all the
+IRCompileLayer and RTDyldObjectLinkingLayer respectively. This scheme, where all the
work to make a Module executable is done up front, is simple to understand and
its performance characteristics are easy to reason about. However, it will lead
to very high startup times if the amount of code to be compiled is large, and
@@ -33,7 +33,7 @@ the ORC APIs provide us with a layer to lazily compile LLVM IR:
*CompileOnDemandLayer*.
The CompileOnDemandLayer class conforms to the layer interface described in
-Chapter 2, but its addModuleSet method behaves quite differently from the layers
+Chapter 2, but its addModule method behaves quite differently from the layers
we have seen so far: rather than doing any work up front, it just scans the
Modules being added and arranges for each function in them to be compiled the
first time it is called. To do this, the CompileOnDemandLayer creates two small
@@ -73,21 +73,22 @@ lazy compilation. We just need a few changes to the source:
private:
std::unique_ptr<TargetMachine> TM;
const DataLayout DL;
- std::unique_ptr<JITCompileCallbackManager> CompileCallbackManager;
- ObjectLinkingLayer<> ObjectLayer;
- IRCompileLayer<decltype(ObjectLayer)> CompileLayer;
+ RTDyldObjectLinkingLayer ObjectLayer;
+ IRCompileLayer<decltype(ObjectLayer), SimpleCompiler> CompileLayer;
- typedef std::function<std::unique_ptr<Module>(std::unique_ptr<Module>)>
- OptimizeFunction;
+ using OptimizeFunction =
+ std::function<std::shared_ptr<Module>(std::shared_ptr<Module>)>;
IRTransformLayer<decltype(CompileLayer), OptimizeFunction> OptimizeLayer;
+
+ std::unique_ptr<JITCompileCallbackManager> CompileCallbackManager;
CompileOnDemandLayer<decltype(OptimizeLayer)> CODLayer;
public:
- typedef decltype(CODLayer)::ModuleSetHandleT ModuleHandle;
+ using ModuleHandle = decltype(CODLayer)::ModuleHandleT;
First we need to include the CompileOnDemandLayer.h header, then add two new
-members: a std::unique_ptr<CompileCallbackManager> and a CompileOnDemandLayer,
+members: a std::unique_ptr<JITCompileCallbackManager> and a CompileOnDemandLayer,
to our class. The CompileCallbackManager member is used by the CompileOnDemandLayer
to create the compile callback needed for each function.
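A sketch of how that JITCompileCallbackManager member can be initialized, assuming ORC's createLocalCompileCallbackManager helper from IndirectionUtils.h; the error-handler address of 0 (meaning no error handler) is an assumption:

.. code-block:: c++

  // Sketch: create a compile callback manager for the host target triple.
  // The second argument is the address to jump to if a callback fails to
  // compile its function; 0 here means no error handler is installed.
  auto CCMgr =
      orc::createLocalCompileCallbackManager(TM->getTargetTriple(), 0);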
@@ -95,9 +96,10 @@ to create the compile callback needed for each function.
KaleidoscopeJIT()
: TM(EngineBuilder().selectTarget()), DL(TM->createDataLayout()),
+ ObjectLayer([]() { return std::make_shared<SectionMemoryManager>(); }),
CompileLayer(ObjectLayer, SimpleCompiler(*TM)),
OptimizeLayer(CompileLayer,
- [this](std::unique_ptr<Module> M) {
+ [this](std::shared_ptr<Module> M) {
return optimizeModule(std::move(M));
}),
CompileCallbackManager(
@@ -133,7 +135,7 @@ our CompileCallbackManager. Finally, we need to supply an "indirect stubs
manager builder": a utility function that constructs IndirectStubManagers, which
are in turn used to build the stubs for the functions in each module. The
CompileOnDemandLayer will call the indirect stub manager builder once for each
-call to addModuleSet, and use the resulting indirect stubs manager to create
+call to addModule, and use the resulting indirect stubs manager to create
stubs for all functions in all modules in the set. If/when the module set is
removed from the JIT the indirect stubs manager will be deleted, freeing any
memory allocated to the stubs. We supply this function by using the
@@ -144,9 +146,8 @@ createLocalIndirectStubsManagerBuilder utility.
// ...
if (auto Sym = CODLayer.findSymbol(Name, false))
// ...
- return CODLayer.addModuleSet(std::move(Ms),
- make_unique<SectionMemoryManager>(),
- std::move(Resolver));
+ return cantFail(CODLayer.addModule(std::move(Ms),
+ std::move(Resolver)));
// ...
// ...
@@ -154,7 +155,7 @@ createLocalIndirectStubsManagerBuilder utility.
// ...
// ...
- CODLayer.removeModuleSet(H);
+ CODLayer.removeModule(H);
// ...
Finally, we need to replace the references to OptimizeLayer in our addModule,
@@ -173,7 +174,7 @@ layer added to enable lazy function-at-a-time compilation. To build this example
.. code-block:: bash
# Compile
- clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orc native` -O3 -o toy
+ clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy
# Run
./toy
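As the text above notes, the remaining JIT methods now talk to CODLayer rather than OptimizeLayer. A sketch of the resulting findSymbol, with the name-mangling pattern assumed from the earlier chapters (addModule and removeModule forward to CODLayer in the same way, as shown in the hunks above):

.. code-block:: c++

  // Sketch: symbol lookup now goes through the CompileOnDemandLayer.
  JITSymbol findSymbol(const std::string Name) {
    // Mangle the name the way the compiler would before searching, so that
    // source-level names match the symbols in the JIT'd objects.
    std::string MangledName;
    raw_string_ostream MangledNameStream(MangledName);
    Mangler::getNameWithPrefix(MangledNameStream, Name, DL);
    return CODLayer.findSymbol(MangledNameStream.str(), true);
  }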
diff --git a/docs/tutorial/BuildingAJIT4.rst b/docs/tutorial/BuildingAJIT4.rst
index 39d9198a85c..3d3f81e4385 100644
--- a/docs/tutorial/BuildingAJIT4.rst
+++ b/docs/tutorial/BuildingAJIT4.rst
@@ -36,7 +36,7 @@ Kaleidoscope ASTS. To build this example, use:
.. code-block:: bash
# Compile
- clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orc native` -O3 -o toy
+ clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy
# Run
./toy
diff --git a/docs/tutorial/BuildingAJIT5.rst b/docs/tutorial/BuildingAJIT5.rst
index 94ea92ce5ad..0fda8610efb 100644
--- a/docs/tutorial/BuildingAJIT5.rst
+++ b/docs/tutorial/BuildingAJIT5.rst
@@ -40,8 +40,10 @@ Kaleidoscope ASTS. To build this example, use:
.. code-block:: bash
# Compile
- clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orc native` -O3 -o toy
+ clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy
+ clang++ -g Server/server.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy-server
# Run
+ ./toy-server &
./toy
Here is the code for the modified KaleidoscopeJIT: