From a32c5efcf58ce9d92047ef910f3a26a6ec268847 Mon Sep 17 00:00:00 2001
From: Baden Hughes <580499+badenh@users.noreply.github.com>
Date: Mon, 17 Feb 2020 11:56:25 +1000
Subject: [PATCH] Update faq.md (#4893)

Various minor editorial updates: style, grammar, typos.
---
 docs/faq.md | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/docs/faq.md b/docs/faq.md
index f070ed59a575b..b5bf65eb52b0d 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -26,24 +26,24 @@ See [Installation](http://docs.tvm.ai/install/)
 TVM's relation to Other IR/DSL Projects
 ---------------------------------------
-There are usually two levels of abstractions of IR in the deep learning systems.
-TensorFlow's XLA and Intel's ngraph uses computation graph representation.
-This representation is high level, and can be helpful to perform generic optimizations
+There are usually two levels of IR abstraction in deep learning systems.
+TensorFlow's XLA and Intel's ngraph both use a computation graph representation.
+This representation is high-level, and can be helpful for performing generic optimizations
 such as memory reuse, layout transformation and automatic differentiation.
-TVM adopts a low level representation, that explicitly express the choice of memory
-layout, parallelization pattern, locality and hardware primitives etc.
-This level of IR is closer to directly target hardwares.
-The low level IR adopt ideas from existing image processing languages like Halide, darkroom
-and loop transformation tools like loopy and polyhedra based analysis.
-We specifically focus of expressing deep learning workloads(e.g. recurrence),
+TVM adopts a low-level representation that explicitly expresses the choice of memory
+layout, parallelization pattern, locality, hardware primitives, etc.
+This level of IR maps more directly to the target hardware.
+The low-level IR adopts ideas from existing image processing languages like Halide, darkroom
+and loop transformation tools like loopy and polyhedra-based analysis.
+We specifically focus on expressing deep learning workloads (e.g. recurrence),
 optimization for different hardware backends and embedding with frameworks to provide
-end-to-end compilation stack.
+an end-to-end compilation stack.
 
-TVM's relation to libDNN cuDNN
+TVM's relation to libDNN, cuDNN
 ------------------------------
-TVM can incorporate these library as external calls. One goal of TVM is to be able to
-generate high performing kernels. We will evolve TVM an incremental manner as
-we learn from the technics of manual kernel crafting and add these as primitives in DSL.
+TVM can incorporate these libraries as external calls. One goal of TVM is to be able to
+generate high-performing kernels. We will evolve TVM in an incremental manner as
+we learn from the techniques of manual kernel crafting and add these as primitives in the DSL.
 See also [TVM Operator Inventory](https://github.com/apache/incubator-tvm/tree/master/topi)
 for recipes of operators in TVM.
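
The low-level representation described in the first patched answer is concretely visible in TVM's schedule API, where memory layout and parallelization pattern are explicit, user-visible choices applied to loops. A minimal sketch, assuming a TVM build from around this patch's era that has the `te` namespace (earlier releases expose the same calls directly under `tvm`):

```python
# Sketch: the low-level IR makes loop structure and parallelization explicit.
# Assumes TVM with the `te` namespace (circa v0.7-dev, early 2020).
import tvm
from tvm import te

n = 1024
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")
C = te.compute((n, n), lambda i, j: A[i, j] + B[i, j], name="C")

s = te.create_schedule(C.op)
# Explicit loop transformation: tile the 2-D iteration space into 32x32 blocks.
io, jo, ii, ji = s[C].tile(C.op.axis[0], C.op.axis[1], x_factor=32, y_factor=32)
# Explicit parallelization pattern: parallel outer loop, vectorized inner loop.
s[C].parallel(io)
s[C].vectorize(ji)

# Inspect the resulting low-level IR, with the scheduling choices visible.
print(tvm.lower(s, [A, B, C], simple_mode=True))
```

Running this prints the lowered IR, in which the 32x32 tiling and the `parallel`/`vectorize` annotations appear as explicit loop attributes rather than decisions hidden inside the compiler.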
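
For the libDNN/cuDNN answer, `tvm.contrib` already demonstrates the external-call mechanism: vendor kernels are wrapped behind `te.extern` so they compose with TVM-generated operators. A minimal sketch using the BLAS wrapper, assuming TVM was built with BLAS enabled (`tvm.contrib.cudnn` follows the same pattern for NVIDIA GPUs):

```python
# Sketch: incorporating an external library kernel instead of generating one.
# Assumes TVM built with a BLAS library (USE_BLAS set in the build config).
import tvm
from tvm import te
from tvm.contrib import cblas

n = 1024
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")

# C is produced by an external BLAS gemm call (wrapped via te.extern),
# not by TVM's own code generation.
C = cblas.matmul(A, B)

s = te.create_schedule(C.op)
f = tvm.build(s, [A, B, C], target="llvm")
```

The external kernel occupies a node in the same compute graph as generated code, so library calls and TVM-generated kernels can be mixed per operator rather than per model.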