Update faq.md (#4893)
various minor editorial updates - style, grammar, typos.
badenh authored Feb 17, 2020
1 parent 95de08b commit a43e326
Showing 1 changed file with 9 additions and 9 deletions.
docs/faq.md (9 additions, 9 deletions)
@@ -26,24 +26,24 @@ See [Installation](http://docs.tvm.ai/install/)
TVM's relation to Other IR/DSL Projects
---------------------------------------
There are usually two levels of IR abstraction in deep learning systems.
-TensorFlow's XLA and Intel's ngraph uses computation graph representation.
+TensorFlow's XLA and Intel's ngraph both use a computation graph representation.
This representation is high level and helpful for performing generic optimizations
such as memory reuse, layout transformation, and automatic differentiation.

-TVM adopts a low level representation, that explicitly express the choice of memory
+TVM adopts a low-level representation that explicitly expresses the choice of memory
layout, parallelization pattern, locality, hardware primitives, etc.
This level of IR is closer to directly targeting hardware.
-The low level IR adopt ideas from existing image processing languages like Halide, darkroom
-and loop transformation tools like loopy and polyhedra based analysis.
-We specifically focus of expressing deep learning workloads(e.g. recurrence),
+The low-level IR adopts ideas from existing image processing languages like Halide, darkroom
+and loop transformation tools like loopy and polyhedra-based analysis.
+We specifically focus on expressing deep learning workloads (e.g. recurrence),
optimization for different hardware backends, and embedding with frameworks to provide
an end-to-end compilation stack.


-TVM's relation to libDNN cuDNN
+TVM's relation to libDNN, cuDNN
------------------------------
-TVM can incorporate these library as external calls. One goal of TVM is to be able to
-generate high performing kernels. We will evolve TVM an incremental manner as
-we learn from the technics of manual kernel crafting and add these as primitives in DSL.
+TVM can incorporate these libraries as external calls. One goal of TVM is to be able to
+generate high-performing kernels. We will evolve TVM in an incremental manner as
+we learn from the techniques of manual kernel crafting and add these as primitives in the DSL.
See also [TVM Operator Inventory](https://github.com/apache/incubator-tvm/tree/master/topi) for
recipes of operators in TVM.
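As context for the low-level representation the first answer describes, here is a minimal tensor-expression sketch of what "explicitly expressing the parallelization pattern" looks like. This is an illustration, not part of the commit; it assumes a TVM build that ships the `tvm.te` namespace and a CUDA-enabled runtime, and all names (`n`, `A`, `B`, `C`, `vector_add`) are made up for the example:

```python
import tvm
from tvm import te

# Declare the computation: C[i] = A[i] + B[i].
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# The schedule is where the low-level choices live: split the loop,
# then bind the pieces to GPU blocks and threads explicitly.
s = te.create_schedule(C.op)
bx, tx = s[C].split(C.op.axis[0], factor=64)
s[C].bind(bx, te.thread_axis("blockIdx.x"))
s[C].bind(tx, te.thread_axis("threadIdx.x"))

# Lower and build a CUDA kernel from the scheduled computation.
fadd = tvm.build(s, [A, B, C], target="cuda", name="vector_add")
```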
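And as a sketch of the "external calls" point in the second answer: recent TVM releases can route supported operators to cuDNN from a Relay build via a `-libs=cudnn` target flag (as in TVM's external-library tutorial). The one-convolution network below is a hypothetical placeholder, and the shapes are arbitrary:

```python
import tvm
from tvm import relay

# A tiny placeholder network: one conv2d. With "-libs=cudnn" in the
# target string, TVM emits an external call into cuDNN for the
# convolution instead of generating its own kernel.
data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="float32")
net = relay.nn.conv2d(data, weight, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], net))

target = "cuda -libs=cudnn"  # drop -libs=cudnn to fall back to TVM-generated kernels
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target)
```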
