From 8e077461029fc6926cb2c83129de617c5592f3df Mon Sep 17 00:00:00 2001
From: Marcus Shawcroft
Date: Thu, 2 May 2019 17:11:37 +0100
Subject: [PATCH] [DOC] Various documentation improvements (#3133)

---
 docs/contribute/code_guide.rst                |  4 ++--
 docs/install/docker.rst                       |  4 ++--
 docs/install/from_source.rst                  | 10 +++++-----
 docs/install/index.rst                        |  2 +-
 include/tvm/schedule.h                        | 10 +++++-----
 python/tvm/tensor.py                          |  6 +++---
 tutorials/autotvm/tune_conv2d_cuda.py         |  2 +-
 tutorials/autotvm/tune_relay_arm.py           | 12 ++++++------
 tutorials/autotvm/tune_relay_cuda.py          |  4 ++--
 tutorials/autotvm/tune_relay_mobile_gpu.py    | 10 +++++-----
 tutorials/autotvm/tune_relay_x86.py           |  8 ++++----
 tutorials/autotvm/tune_simple_template.py     |  8 ++++----
 tutorials/cross_compilation_and_rpc.py        |  6 +++---
 tutorials/frontend/deploy_model_on_android.py |  6 +++---
 tutorials/frontend/deploy_model_on_rasp.py    |  2 +-
 tutorials/frontend/deploy_ssd_gluoncv.py      |  2 +-
 tutorials/frontend/from_caffe2.py             |  4 ++--
 tutorials/frontend/from_tensorflow.py         |  2 +-
 tutorials/frontend/from_tflite.py             |  2 +-
 tutorials/language/extern_op.py               |  6 +++---
 tutorials/language/scan.py                    |  2 +-
 tutorials/tensor_expr_get_started.py          |  8 ++++----
 22 files changed, 60 insertions(+), 60 deletions(-)

diff --git a/docs/contribute/code_guide.rst b/docs/contribute/code_guide.rst
index f9bd61c375d3..6a426b8277a0 100644
--- a/docs/contribute/code_guide.rst
+++ b/docs/contribute/code_guide.rst
@@ -20,7 +20,7 @@
 Code Guide and Tips
 ===================
 
-This is a document used to record tips in tvm codebase for reviewers and contributors.
+This is a document used to record tips in the TVM codebase for reviewers and contributors.
 Most of them are summarized through lessons during the contributing and process.
 
 
@@ -42,7 +42,7 @@ Python Code Styles
 Handle Integer Constant Expression
 ----------------------------------
-We often need to handle constant integer expressions in tvm. Before we do so, the first question we want to ask is that is it really necessary to get a constant integer. If symbolic expression also works and let the logic flow, we should use symbolic expression as much as possible. So the generated code works for shapes that are not known ahead of time.
+We often need to handle constant integer expressions in TVM. Before we do so, the first question to ask is whether a constant integer is really necessary. If a symbolic expression also works and lets the logic flow, we should use symbolic expressions as much as possible, so that the generated code works for shapes that are not known ahead of time.
 
 Note that in some cases we cannot know certain information, e.g. sign of symbolic variable, it is ok to make assumptions in certain cases. While adding precise support if the variable is constant.
 
diff --git a/docs/install/docker.rst b/docs/install/docker.rst
index f4236d7a29cd..eb7331c0a1b7 100644
--- a/docs/install/docker.rst
+++ b/docs/install/docker.rst
@@ -19,13 +19,13 @@
 Docker Images
 =============
 
-We provide several prebuilt docker images to quickly try out tvm.
+We provide several prebuilt docker images to quickly try out TVM.
 These images are also helpful run through TVM demo and tutorials.
 You can get the docker images via the following steps.
 We need `docker `_ and
 `nvidia-docker `_ if we want to use cuda.
 
-First, clone tvm repo to get the auxiliary scripts
+First, clone the TVM repo to get the auxiliary scripts
 
 .. code:: bash
diff --git a/docs/install/from_source.rst b/docs/install/from_source.rst
index 62f669ec77b4..3a769dee2dce 100644
--- a/docs/install/from_source.rst
+++ b/docs/install/from_source.rst
@@ -19,13 +19,13 @@
 Install from Source
 ===================
 
-This page gives instructions on how to build and install the tvm package from
+This page gives instructions on how to build and install the TVM package from
 scratch on various systems. It consists of two steps:
 
 1. First build the shared library from the C++ codes (`libtvm.so` for linux, `libtvm.dylib` for macOS and `libtvm.dll` for windows).
 2. Setup for the language packages (e.g. Python Package).
 
-To get started, clone tvm repo from github. It is important to clone the submodules along, with ``--recursive`` option.
+To get started, clone the TVM repo from GitHub. It is important to clone the submodules along with it, using the ``--recursive`` option.
 
 .. code:: bash
 
@@ -63,7 +63,7 @@ The minimal building requirements are
 - If you want to use the NNVM compiler, then LLVM is required
 
 We use cmake to build the library.
-The configuration of tvm can be modified by `config.cmake`.
+The configuration of TVM can be modified by `config.cmake`.
 
 
 - First, check the cmake in your system. If you do not have cmake,
@@ -111,7 +111,7 @@ Building on Windows
 TVM support build via MSVC using cmake. The minimum required VS version is **Visual
 Studio Community 2015 Update 3**. In order to generate the VS solution file using cmake,
-make sure you have a recent version of cmake added to your path and then from the tvm directory:
+make sure you have a recent version of cmake added to your path and then from the TVM directory:
 
 .. code:: bash
 
@@ -159,7 +159,7 @@ Method 1
 
 Method 2
-   Install tvm python bindings by `setup.py`:
+   Install the TVM python bindings by `setup.py`:
 
    .. code:: bash
 
diff --git a/docs/install/index.rst b/docs/install/index.rst
index 560811b5f78e..f1caec14e68b 100644
--- a/docs/install/index.rst
+++ b/docs/install/index.rst
@@ -19,7 +19,7 @@
 Installation
 ============
 To install TVM, please read :ref:`install-from-source`.
 If you are interested in deploying to mobile/embedded devices,
-you do not need to install the entire tvm stack on your device,
+you do not need to install the entire TVM stack on your device,
 instead, you only need the runtime, please read :ref:`deploy-and-integration`.
 If you would like to quickly try out TVM or do demo/tutorials, checkout :ref:`docker-images`
diff --git a/include/tvm/schedule.h b/include/tvm/schedule.h
index 6c2a759db471..774d7cd9a40a 100644
--- a/include/tvm/schedule.h
+++ b/include/tvm/schedule.h
@@ -94,10 +94,10 @@ class Stage : public NodeRef {
    */
   EXPORT Stage& compute_root(); // NOLINT(*)
   /*!
-   * \brief Bind the ivar to thread index.
+   * \brief Bind the IterVar to thread index.
    *
-   * \param ivar The IterVar to be binded.
-   * \param thread_ivar The thread axis to be binded.
+   * \param ivar The IterVar to be bound.
+   * \param thread_ivar The thread axis to be bound.
    * \return reference to self.
    */
   EXPORT Stage& bind(IterVar ivar, IterVar thread_ivar);
@@ -107,7 +107,7 @@ class Stage : public NodeRef {
   * need one of them to do the store.
   *
   * \note This is a dangerous scheduling primitive that can change behavior of program.
-  *       Only do when we are certain that thare are duplicated store.
+  *       Only do when we are certain that there are duplicated stores.
   * \param predicate The condition to be checked.
   * \return reference to self.
   */
@@ -155,7 +155,7 @@ class Stage : public NodeRef {
   * \param p_target The result target domain.
  *
  * \note axes can be an empty array,
- *       in that case, a singleton itervar is created and
+ *       in that case, a singleton IterVar is created and
  *       inserted to the outermost loop.
  *       The fuse of empty array is used to support zero-dimension tensors.
  *
diff --git a/python/tvm/tensor.py b/python/tvm/tensor.py
index 1e297a471863..ce7cbae385d9 100644
--- a/python/tvm/tensor.py
+++ b/python/tvm/tensor.py
@@ -110,7 +110,7 @@ def op(self):
 
     @property
     def value_index(self):
-        """The output value index the tensor corressponds to."""
+        """The output value index the tensor corresponds to."""
        return self.__getattr__("value_index")
 
     @property
@@ -128,7 +128,7 @@ def name(self):
 
 
 class Operation(NodeBase):
-    """Represent an operation that generate a tensor"""
+    """Represent an operation that generates a tensor"""
 
     def output(self, index):
         """Get the index-th output of the operation
@@ -197,7 +197,7 @@ def scan_axis(self):
 
 @register_node
 class ExternOp(Operation):
-    """Extern operation."""
+    """External operation."""
 
 
 @register_node
diff --git a/tutorials/autotvm/tune_conv2d_cuda.py b/tutorials/autotvm/tune_conv2d_cuda.py
index 7124ad0a8fbb..a367c9925900 100644
--- a/tutorials/autotvm/tune_conv2d_cuda.py
+++ b/tutorials/autotvm/tune_conv2d_cuda.py
@@ -34,7 +34,7 @@
 #
 #   pip3 install --user psutil xgboost tornado
 #
-# To make tvm run faster in tuning, it is recommended to use cython
+# To make TVM run faster in tuning, it is recommended to use cython
 # as FFI of tvm. In the root directory of tvm, execute
 #
 # .. code-block:: bash
diff --git a/tutorials/autotvm/tune_relay_arm.py b/tutorials/autotvm/tune_relay_arm.py
index 0f5ab8237461..2c1dca9921eb 100644
--- a/tutorials/autotvm/tune_relay_arm.py
+++ b/tutorials/autotvm/tune_relay_arm.py
@@ -27,7 +27,7 @@
 The template has many tunable knobs (tile factor, vectorization, unrolling, etc).
 We will tune all convolution and depthwise convolution operators
 in the neural network. After tuning, we produce a log file which stores
-the best knob values for all required operators. When the tvm compiler compiles
+the best knob values for all required operators. When the TVM compiler compiles
 these operators, it will query this log file to get the best knob values.
 
 We also released pre-tuned parameters for some arm devices. You can go to
@@ -45,8 +45,8 @@
 #
 #   pip3 install --user psutil xgboost tornado
 #
-# To make tvm run faster during tuning, it is recommended to use cython
-# as FFI of tvm. In the root directory of tvm, execute
+# To make TVM run faster during tuning, it is recommended to use cython
+# as FFI of TVM. In the root directory of TVM, execute
 # (change "3" to "2" if you use python2):
 #
 # .. code-block:: bash
@@ -134,11 +134,11 @@ def get_network(name, batch_size):
 # Register devices to RPC Tracker
 # -----------------------------------
 # Now we can register our devices to the tracker. The first step is to
-# build tvm runtime for the ARM devices.
+# build the TVM runtime for the ARM devices.
 #
 # * For Linux:
 #   Follow this section :ref:`build-tvm-runtime-on-device` to build
-#   tvm runtime on the device. Then register the device to tracker by
+#   the TVM runtime on the device. Then register the device to the tracker by
 #
 #   .. code-block:: bash
 #
@@ -148,7 +148,7 @@ def get_network(name, batch_size):
 #
 # * For Android:
 #   Follow this `readme page `_ to
-#   install tvm rpc apk on the android device. Make sure you can pass the android rpc test.
+#   install the TVM RPC APK on the Android device. Make sure you can pass the Android RPC test.
 #   Then you have already registred your device. During tuning, you have to go to developer option
 #   and enable "Keep screen awake during changing" and charge your phone to make it stable.
 #
diff --git a/tutorials/autotvm/tune_relay_cuda.py b/tutorials/autotvm/tune_relay_cuda.py
index f8ef71996ff4..571334e8c106 100644
--- a/tutorials/autotvm/tune_relay_cuda.py
+++ b/tutorials/autotvm/tune_relay_cuda.py
@@ -27,7 +27,7 @@
 The template has many tunable knobs (tile factor, unrolling, etc).
 We will tune all convolution and depthwise convolution operators
 in the neural network. After tuning, we produce a log file which stores
-the best knob values for all required operators. When the tvm compiler compiles
+the best knob values for all required operators. When the TVM compiler compiles
 these operators, it will query this log file to get the best knob values.
 
 We also released pre-tuned parameters for some NVIDIA GPUs. You can go to
@@ -45,7 +45,7 @@
 #
 #   pip3 install --user psutil xgboost tornado
 #
-# To make tvm run faster during tuning, it is recommended to use cython
+# To make TVM run faster during tuning, it is recommended to use cython
 # as FFI of tvm. In the root directory of tvm, execute:
 #
 # .. code-block:: bash
diff --git a/tutorials/autotvm/tune_relay_mobile_gpu.py b/tutorials/autotvm/tune_relay_mobile_gpu.py
index 5b231064e2ac..1e4cf6d52ade 100644
--- a/tutorials/autotvm/tune_relay_mobile_gpu.py
+++ b/tutorials/autotvm/tune_relay_mobile_gpu.py
@@ -27,7 +27,7 @@
 The template has many tunable knobs (tile factor, vectorization, unrolling, etc).
 We will tune all convolution, depthwise convolution and dense operators
 in the neural network. After tuning, we produce a log file which stores
-the best knob values for all required operators. When the tvm compiler compiles
+the best knob values for all required operators. When the TVM compiler compiles
 these operators, it will query this log file to get the best knob values.
 
 We also released pre-tuned parameters for some arm devices. You can go to
@@ -45,7 +45,7 @@
 #
 #   pip3 install --user psutil xgboost tornado
 #
-# To make tvm run faster during tuning, it is recommended to use cython
+# To make TVM run faster during tuning, it is recommended to use cython
 # as FFI of tvm. In the root directory of tvm, execute
 # (change "3" to "2" if you use python2):
 #
@@ -135,11 +135,11 @@ def get_network(name, batch_size):
 # Register devices to RPC Tracker
 # -----------------------------------
 # Now we can register our devices to the tracker. The first step is to
-# build tvm runtime for the ARM devices.
+# build the TVM runtime for the ARM devices.
 #
 # * For Linux:
 #   Follow this section :ref:`build-tvm-runtime-on-device` to build
-#   tvm runtime on the device. Then register the device to tracker by
+#   the TVM runtime on the device. Then register the device to the tracker by
 #
 #   .. code-block:: bash
 #
@@ -149,7 +149,7 @@ def get_network(name, batch_size):
 #
 # * For Android:
 #   Follow this `readme page `_ to
-#   install tvm rpc apk on the android device. Make sure you can pass the android rpc test.
+#   install the TVM RPC APK on the Android device. Make sure you can pass the Android RPC test.
 #   Then you have already registred your device. During tuning, you have to go to developer option
 #   and enable "Keep screen awake during changing" and charge your phone to make it stable.
 #
diff --git a/tutorials/autotvm/tune_relay_x86.py b/tutorials/autotvm/tune_relay_x86.py
index 0fa4e31f2b19..f100a35e5770 100644
--- a/tutorials/autotvm/tune_relay_x86.py
+++ b/tutorials/autotvm/tune_relay_x86.py
@@ -20,7 +20,7 @@
 **Author**: `Yao Wang `_, `Eddie Yan `_
 
 This is a tutorial about how to tune convolution neural network
-for x86 cpu.
+for x86 CPU.
 """
 import os
 import numpy as np
@@ -70,7 +70,7 @@ def get_network(name, batch_size):
     return net, params, input_shape, output_shape
 
 
-# Replace "llvm" with the correct target of your cpu.
+# Replace "llvm" with the correct target of your CPU.
 # For example, for AWS EC2 c5 instance with Intel Xeon
 # Platinum 8000 series, the target should be "llvm -mcpu=skylake-avx512".
 # For AWS EC2 c4 instance with Intel Xeon E5-2666 v3, it should be
@@ -83,7 +83,7 @@ def get_network(name, batch_size):
 log_file = "%s.log" % model_name
 
 # Set number of threads used for tuning based on the number of
-# physical cpu cores on your machine.
+# physical CPU cores on your machine.
 num_threads = 1
 os.environ["TVM_NUM_THREADS"] = str(num_threads)
 
@@ -91,7 +91,7 @@ def get_network(name, batch_size):
 #################################################################
 # Configure tensor tuning settings and create tasks
 # -------------------------------------------------
-# To get better kernel execution performance on x86 cpu,
+# To get better kernel execution performance on x86 CPU,
 # we need to change data layout of convolution kernel from
 # "NCHW" to "NCHWc". To deal with this situation, we define
 # conv2d_NCHWc operator in topi. We will tune this operator
diff --git a/tutorials/autotvm/tune_simple_template.py b/tutorials/autotvm/tune_simple_template.py
index 45f95947341f..c7eea7f42c0b 100644
--- a/tutorials/autotvm/tune_simple_template.py
+++ b/tutorials/autotvm/tune_simple_template.py
@@ -38,8 +38,8 @@
 #
 #   pip3 install --user psutil xgboost
 #
-# To make tvm run faster in tuning, it is recommended to use cython
-# as FFI of tvm. In the root directory of tvm, execute
+# To make TVM run faster in tuning, it is recommended to use cython
+# as FFI of TVM. In the root directory of TVM, execute
 # (change "3" to "2" if you use python2):
 #
 # .. code-block:: bash
@@ -61,7 +61,7 @@
 ######################################################################
 # Step 1: Define the search space
 # --------------------------------
-# In this section, we will rewrite a deterministic tvm schedule code to a
+# In this section, we will rewrite a deterministic TVM schedule into a
 # tunable schedule template. You can regard the process of search space definition
 # as the parameterization of our existing schedule code.
 #
@@ -288,7 +288,7 @@ def matmul(N, L, M, dtype):
 logging.getLogger('autotvm').addHandler(logging.StreamHandler(sys.stdout))
 
 # There are two steps for measuring a config: build and run.
-# By default, we use all cpu cores to compile program. Then measure them sequentially.
+# By default, we use all CPU cores to compile the program. Then we measure them sequentially.
 # We measure 5 times and take average to reduce variance.
 measure_option = autotvm.measure_option(
     builder='local',
diff --git a/tutorials/cross_compilation_and_rpc.py b/tutorials/cross_compilation_and_rpc.py
index 1872b7dafe74..ea1b88cbf96a 100644
--- a/tutorials/cross_compilation_and_rpc.py
+++ b/tutorials/cross_compilation_and_rpc.py
@@ -35,7 +35,7 @@
 # Build TVM Runtime on Device
 # ---------------------------
 #
-# The first step is to build tvm runtime on the remote device.
+# The first step is to build the TVM runtime on the remote device.
 #
 # .. note::
 #
@@ -43,8 +43,8 @@
 #   All instructions in both this section and next section should be
 #   executed on the target device, e.g. Raspberry Pi. And we assume it
 #   has Linux running.
 #
-# Since we do compilation on local machine, the remote device is only used
-# for running the generated code. We only need to build tvm runtime on
+# Since we do compilation on the local machine, the remote device is only used
+# for running the generated code. We only need to build the TVM runtime on
 # the remote device.
 #
 # .. code-block:: bash
diff --git a/tutorials/frontend/deploy_model_on_android.py b/tutorials/frontend/deploy_model_on_android.py
index 6985e3ad793d..a3ea8651b110 100644
--- a/tutorials/frontend/deploy_model_on_android.py
+++ b/tutorials/frontend/deploy_model_on_android.py
@@ -52,7 +52,7 @@
 #   docker run --pid=host -h tvm -v $PWD:/workspace \
 #          -w /workspace -p 9190:9190 --name tvm -it tvm.demo_android bash
 #
-# You are now inside the container. The cloned tvm directory is mounted on /workspace.
+# You are now inside the container. The cloned TVM directory is mounted on /workspace.
 # At this time, mount the 9190 port used by RPC described later.
 #
 # .. note::
@@ -74,7 +74,7 @@
 #     ..
 #     make -j10
 #
-# After building tvm successfully, Please set PYTHONPATH.
+# After building TVM successfully, please set PYTHONPATH.
 #
 # .. code-block:: bash
 #
@@ -106,7 +106,7 @@
 # Now we can register our Android device to the tracker.
 #
 # Follow this `readme page `_ to
-# install tvm rpc apk on the android device.
+# install the TVM RPC APK on the Android device.
 #
 # Here is an example of config.mk. I enabled OpenCL and Vulkan.
 #
diff --git a/tutorials/frontend/deploy_model_on_rasp.py b/tutorials/frontend/deploy_model_on_rasp.py
index 8015b0b1c89e..c471e8228840 100644
--- a/tutorials/frontend/deploy_model_on_rasp.py
+++ b/tutorials/frontend/deploy_model_on_rasp.py
@@ -38,7 +38,7 @@
 # Build TVM Runtime on Device
 # ---------------------------
 #
-# The first step is to build tvm runtime on the remote device.
+# The first step is to build the TVM runtime on the remote device.
 #
 # .. note::
 #
diff --git a/tutorials/frontend/deploy_ssd_gluoncv.py b/tutorials/frontend/deploy_ssd_gluoncv.py
index ff7691c7bf55..f536679183c8 100644
--- a/tutorials/frontend/deploy_ssd_gluoncv.py
+++ b/tutorials/frontend/deploy_ssd_gluoncv.py
@@ -43,7 +43,7 @@
 # To get best inference performance on CPU, change
 # target argument according to your device and
 # follow the :ref:`tune_relay_x86` to tune x86 CPU and
-# :ref:`tune_relay_arm` for arm cpu.
+# :ref:`tune_relay_arm` for ARM CPU.
 #
 # To get best performance fo SSD on Intel graphics,
 # change target argument to 'opencl -device=intel_graphics'
diff --git a/tutorials/frontend/from_caffe2.py b/tutorials/frontend/from_caffe2.py
index 8185767cb038..ceec8c0ad119 100644
--- a/tutorials/frontend/from_caffe2.py
+++ b/tutorials/frontend/from_caffe2.py
@@ -86,7 +86,7 @@ def transform_image(image):
 func, params = relay.frontend.from_caffe2(resnet50.init_net, resnet50.predict_net, shape_dict, dtype_dict)
 
 # compile the model
-# target x86 cpu
+# target x86 CPU
 target = 'llvm'
 with relay.build_config(opt_level=3):
     graph, lib, params = relay.build(func, target, params=params)
@@ -97,7 +97,7 @@ def transform_image(image):
 # The process is no different from other examples.
 import tvm
 from tvm.contrib import graph_runtime
-# context x86 cpu, use tvm.gpu(0) if you run on GPU
+# context x86 CPU, use tvm.gpu(0) if you run on GPU
 ctx = tvm.cpu(0)
 # create a runtime executor module
 m = graph_runtime.create(graph, lib, ctx)
diff --git a/tutorials/frontend/from_tensorflow.py b/tutorials/frontend/from_tensorflow.py
index 58f63a0b7e78..8d402820377e 100644
--- a/tutorials/frontend/from_tensorflow.py
+++ b/tutorials/frontend/from_tensorflow.py
@@ -135,7 +135,7 @@
 #   Results:
 #     graph: Final graph after compilation.
 #     params: final params after compilation.
-#     lib: target library which can be deployed on target with tvm runtime.
+#     lib: target library which can be deployed on target with the TVM runtime.
 
 with relay.build_config(opt_level=3):
     graph, lib, params = relay.build(sym, target=target, target_host=target_host, params=params)
diff --git a/tutorials/frontend/from_tflite.py b/tutorials/frontend/from_tflite.py
index 52ecb65b3689..67edeb8a38de 100644
--- a/tutorials/frontend/from_tflite.py
+++ b/tutorials/frontend/from_tflite.py
@@ -151,7 +151,7 @@ def extract(path):
                                           shape_dict={input_tensor: input_shape},
                                           dtype_dict={input_tensor: input_dtype})
 
-# targt x86 cpu
+# target x86 CPU
 target = "llvm"
 with relay.build_module.build_config(opt_level=3):
     graph, lib, params = relay.build(func, target, params=params)
diff --git a/tutorials/language/extern_op.py b/tutorials/language/extern_op.py
index 071968ce2b1f..2ad3e3063415 100644
--- a/tutorials/language/extern_op.py
+++ b/tutorials/language/extern_op.py
@@ -25,7 +25,7 @@
 some of the convolution kernels and define the rest of the stages.
 
 TVM supports these black box function calls natively.
-Specfically, tvm support all the tensor functions that are DLPack compatible.
+Specifically, TVM supports all the tensor functions that are DLPack compatible.
 Which means we can call any function with POD types(pointer, int, float)
 or pointer to DLTensor as argument.
 """
@@ -46,7 +46,7 @@
 # The compute function takes list of symbolic placeholder for the inputs,
 # list of symbolic placeholder for the outputs and returns the executing statement.
 #
-# In this case we simply call a registered tvm function, which invokes a CBLAS call.
+# In this case we simply call a registered TVM function, which invokes a CBLAS call.
 # TVM does not control internal of the extern array function and treats it as blackbox.
 # We can further mix schedulable TVM calls that add a bias term to the result.
 #
@@ -95,7 +95,7 @@
 # Since we can call into any PackedFunc in TVM. We can use the extern
 # function to callback into python.
 #
-# The following example registers a python function into tvm runtime system
+# The following example registers a python function into the TVM runtime system
 # and use it to complete one stage of the computation.
 # This makes TVM much more flexible. For example, we can insert front-end
 # callbacks to inspect the intermediate results or mix customized code
diff --git a/tutorials/language/scan.py b/tutorials/language/scan.py
index be637fba0f70..2fa9c210ead2 100644
--- a/tutorials/language/scan.py
+++ b/tutorials/language/scan.py
@@ -77,7 +77,7 @@
 ######################################################################
 # Build and Verify
 # ----------------
-# We can build the scan kernel like other tvm kernels, here we use
+# We can build the scan kernel like other TVM kernels; here we use
 # numpy to verify the correctness of the result.
 #
 fscan = tvm.build(s, [X, s_scan], "cuda", name="myscan")
diff --git a/tutorials/tensor_expr_get_started.py b/tutorials/tensor_expr_get_started.py
index b066fbad57c6..cdd07d466a37 100644
--- a/tutorials/tensor_expr_get_started.py
+++ b/tutorials/tensor_expr_get_started.py
@@ -143,10 +143,10 @@
 # We provide an minimum array API in python to aid quick testing and prototyping.
 # The array API is based on `DLPack `_ standard.
 #
-# - We first create a gpu context.
-# - Then tvm.nd.array copies the data to gpu.
+# - We first create a GPU context.
+# - Then tvm.nd.array copies the data to the GPU.
 # - fadd runs the actual computation.
-# - asnumpy() copies the gpu array back to cpu and we can use this to verify correctness
+# - asnumpy() copies the GPU array back to the CPU, and we can use this to verify correctness
 #
 
 ctx = tvm.context(tgt, 0)
@@ -161,7 +161,7 @@
 # Inspect the Generated Code
 # --------------------------
 # You can inspect the generated code in TVM. The result of tvm.build
-# is a tvm Module. fadd is the host module that contains the host wrapper,
+# is a TVM Module. fadd is the host module that contains the host wrapper,
 # it also contains a device module for the CUDA (GPU) function.
 #
 # The following code fetches the device module and prints the content code.