[DOC] Various documentation improvements (apache#3133)
mshawcroft authored and wweic committed May 13, 2019
1 parent 1245fb7 commit b62a33a
Showing 22 changed files with 60 additions and 60 deletions.
4 changes: 2 additions & 2 deletions docs/contribute/code_guide.rst
@@ -20,7 +20,7 @@
Code Guide and Tips
===================

-This is a document used to record tips in tvm codebase for reviewers and contributors.
+This is a document used to record tips in the TVM codebase for reviewers and contributors.
Most of them are summarized from lessons learned during the contributing process.


@@ -42,7 +42,7 @@ Python Code Styles

Handle Integer Constant Expression
----------------------------------
-We often need to handle constant integer expressions in tvm. Before we do so, the first question we want to ask is that is it really necessary to get a constant integer. If symbolic expression also works and let the logic flow, we should use symbolic expression as much as possible. So the generated code works for shapes that are not known ahead of time.
+We often need to handle constant integer expressions in TVM. Before we do so, the first question to ask is whether it is really necessary to get a constant integer. If a symbolic expression also works and lets the logic flow, we should use the symbolic expression as much as possible, so the generated code works for shapes that are not known ahead of time.

Note that in some cases we cannot know certain information, e.g. the sign of a symbolic variable; it is OK to make assumptions in such cases, while adding precise support if the variable is constant.
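The "prefer symbolic, specialize on constants" pattern above can be sketched in plain Python. This is only an illustration: ``as_const_int`` and ``halve_extent`` are hypothetical names invented here, not part of the TVM API.

```python
# Hypothetical sketch of "prefer symbolic, specialize on constants".
# as_const_int and halve_extent are illustrative names, not TVM APIs.

def as_const_int(expr):
    """Return a Python int if expr is a constant, otherwise None."""
    return expr if isinstance(expr, int) else None

def halve_extent(extent):
    const = as_const_int(extent)
    if const is not None:
        # Constant case: precise handling is possible, e.g. exact division.
        return const // 2
    # Symbolic case: keep the expression symbolic so the generated
    # code still works for shapes that are not known ahead of time.
    return "(%s) // 2" % extent
```

For example, ``halve_extent(8)`` folds to ``4``, while ``halve_extent("n")`` stays symbolic as ``"(n) // 2"``.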

4 changes: 2 additions & 2 deletions docs/install/docker.rst
@@ -19,13 +19,13 @@

Docker Images
=============
-We provide several prebuilt docker images to quickly try out tvm.
+We provide several prebuilt docker images to quickly try out TVM.
These images are also helpful for running through TVM demos and tutorials.
You can get the docker images via the following steps.
We need `docker <https://docs.docker.com/engine/installation/>`_ and
`nvidia-docker <https://github.com/NVIDIA/nvidia-docker/>`_ if we want to use CUDA.

-First, clone tvm repo to get the auxiliary scripts
+First, clone the TVM repo to get the auxiliary scripts:

.. code:: bash
10 changes: 5 additions & 5 deletions docs/install/from_source.rst
@@ -19,13 +19,13 @@

Install from Source
===================
-This page gives instructions on how to build and install the tvm package from
+This page gives instructions on how to build and install the TVM package from
scratch on various systems. It consists of two steps:

1. First build the shared library from the C++ code (`libtvm.so` for Linux, `libtvm.dylib` for macOS and `libtvm.dll` for Windows).
2. Setup for the language packages (e.g. Python Package).
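As a quick sanity check of step 1, the expected shared-library name can be derived from the operating system. This is a small sketch using only the standard library; ``tvm_lib_name`` is a hypothetical helper, not part of TVM.

```python
import platform

# Map the OS reported by Python to the shared-library name from step 1.
_LIB_NAMES = {
    "Linux": "libtvm.so",
    "Darwin": "libtvm.dylib",   # macOS
    "Windows": "libtvm.dll",
}

def tvm_lib_name(system=None):
    """Return the expected TVM shared-library file name for an OS."""
    return _LIB_NAMES[system or platform.system()]
```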

-To get started, clone tvm repo from github. It is important to clone the submodules along, with ``--recursive`` option.
+To get started, clone the TVM repo from GitHub. It is important to clone the submodules along with it, using the ``--recursive`` option.

.. code:: bash
@@ -63,7 +63,7 @@ The minimal building requirements are
- If you want to use the NNVM compiler, then LLVM is required

We use cmake to build the library.
-The configuration of tvm can be modified by `config.cmake`.
+The configuration of TVM can be modified by `config.cmake`.


- First, check the cmake in your system. If you do not have cmake,
@@ -111,7 +111,7 @@ Building on Windows

TVM supports building via MSVC using cmake. The minimum required VS version is **Visual Studio Community 2015 Update 3**.
In order to generate the VS solution file using cmake,
-make sure you have a recent version of cmake added to your path and then from the tvm directory:
+make sure you have a recent version of cmake added to your path and then from the TVM directory:

.. code:: bash
@@ -159,7 +159,7 @@ Method 1
Method 2
-Install tvm python bindings by `setup.py`:
+Install TVM python bindings by `setup.py`:

.. code:: bash
2 changes: 1 addition & 1 deletion docs/install/index.rst
@@ -19,7 +19,7 @@ Installation
============
To install TVM, please read :ref:`install-from-source`.
If you are interested in deploying to mobile/embedded devices,
-you do not need to install the entire tvm stack on your device,
+you do not need to install the entire TVM stack on your device,
instead, you only need the runtime, please read :ref:`deploy-and-integration`.
If you would like to quickly try out TVM or run demos/tutorials, check out :ref:`docker-images`.

10 changes: 5 additions & 5 deletions include/tvm/schedule.h
@@ -94,10 +94,10 @@ class Stage : public NodeRef {
*/
EXPORT Stage& compute_root(); // NOLINT(*)
/*!
-* \brief Bind the ivar to thread index.
+* \brief Bind the IterVar to thread index.
*
-* \param ivar The IterVar to be binded.
-* \param thread_ivar The thread axis to be binded.
+* \param ivar The IterVar to be bound.
+* \param thread_ivar The thread axis to be bound.
* \return reference to self.
*/
EXPORT Stage& bind(IterVar ivar, IterVar thread_ivar);
@@ -107,7 +107,7 @@ class Stage : public NodeRef {
* need one of them to do the store.
*
* \note This is a dangerous scheduling primitive that can change behavior of program.
-* Only do when we are certain that thare are duplicated store.
+* Only do this when we are certain that there are duplicated stores.
* \param predicate The condition to be checked.
* \return reference to self.
*/
@@ -155,7 +155,7 @@ class Stage : public NodeRef {
* \param p_target The result target domain.
*
* \note axes can be an empty array,
-* in that case, a singleton itervar is created and
+* in that case, a singleton IterVar is created and
* inserted to the outermost loop.
* The fuse of an empty array is used to support zero-dimension tensors.
*
6 changes: 3 additions & 3 deletions python/tvm/tensor.py
@@ -110,7 +110,7 @@ def op(self):

@property
def value_index(self):
-"""The output value index the tensor corressponds to."""
+"""The output value index the tensor corresponds to."""
return self.__getattr__("value_index")

@property
@@ -128,7 +128,7 @@ def name(self):


class Operation(NodeBase):
-"""Represent an operation that generate a tensor"""
+"""Represent an operation that generates a tensor"""

def output(self, index):
"""Get the index-th output of the operation
@@ -197,7 +197,7 @@ def scan_axis(self):

@register_node
class ExternOp(Operation):
-"""Extern operation."""
+"""External operation."""


@register_node
2 changes: 1 addition & 1 deletion tutorials/autotvm/tune_conv2d_cuda.py
@@ -34,7 +34,7 @@
#
# pip3 install --user psutil xgboost tornado
#
-# To make tvm run faster in tuning, it is recommended to use cython
+# To make TVM run faster in tuning, it is recommended to use cython
# as the FFI of TVM. In the root directory of TVM, execute
#
# .. code-block:: bash
12 changes: 6 additions & 6 deletions tutorials/autotvm/tune_relay_arm.py
@@ -27,7 +27,7 @@
The template has many tunable knobs (tile factor, vectorization, unrolling, etc).
We will tune all convolution and depthwise convolution operators
in the neural network. After tuning, we produce a log file which stores
-the best knob values for all required operators. When the tvm compiler compiles
+the best knob values for all required operators. When the TVM compiler compiles
these operators, it will query this log file to get the best knob values.
We also released pre-tuned parameters for some ARM devices. You can go to
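The log-file mechanism described above is, in essence, a mapping from operator workloads to their best measured knob values. A minimal sketch of the lookup the compiler performs (the workloads, knob names, and ``query_best_knobs`` helper below are invented for illustration and are not autotvm's actual format):

```python
# Toy model of the tuning log: workload -> best knob values.
# All keys and knob names here are made up for illustration only.
best_records = {
    ("conv2d", (1, 3, 224, 224)): {"tile": 4, "unroll": True},
    ("depthwise_conv2d", (1, 32, 112, 112)): {"tile": 8, "unroll": False},
}

def query_best_knobs(workload, fallback=None):
    """Return the best knob values recorded for a workload, if any."""
    return best_records.get(workload, fallback)
```

During compilation each operator's workload is looked up this way; a miss falls back to a default schedule.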
@@ -45,8 +45,8 @@
#
# pip3 install --user psutil xgboost tornado
#
-# To make tvm run faster during tuning, it is recommended to use cython
-# as FFI of tvm. In the root directory of tvm, execute
+# To make TVM run faster during tuning, it is recommended to use cython
+# as FFI of TVM. In the root directory of TVM, execute
# (change "3" to "2" if you use python2):
#
# .. code-block:: bash
@@ -134,11 +134,11 @@ def get_network(name, batch_size):
# Register devices to RPC Tracker
# -----------------------------------
# Now we can register our devices to the tracker. The first step is to
-# build tvm runtime for the ARM devices.
+# build the TVM runtime for the ARM devices.
#
# * For Linux:
# Follow this section :ref:`build-tvm-runtime-on-device` to build
-# tvm runtime on the device. Then register the device to tracker by
+# the TVM runtime on the device. Then register the device to tracker by
#
# .. code-block:: bash
#
@@ -148,7 +148,7 @@ def get_network(name, batch_size):
#
# * For Android:
# Follow this `readme page <https://github.com/dmlc/tvm/tree/master/apps/android_rpc>`_ to
-# install tvm rpc apk on the android device. Make sure you can pass the android rpc test.
+# install the TVM RPC APK on the Android device. Make sure you can pass the Android RPC test.
# Then you have already registered your device. During tuning, you have to go to developer options
# and enable "Keep screen awake during charging" and charge your phone to make it stable.
#
4 changes: 2 additions & 2 deletions tutorials/autotvm/tune_relay_cuda.py
@@ -27,7 +27,7 @@
The template has many tunable knobs (tile factor, unrolling, etc).
We will tune all convolution and depthwise convolution operators
in the neural network. After tuning, we produce a log file which stores
-the best knob values for all required operators. When the tvm compiler compiles
+the best knob values for all required operators. When the TVM compiler compiles
these operators, it will query this log file to get the best knob values.
We also released pre-tuned parameters for some NVIDIA GPUs. You can go to
@@ -45,7 +45,7 @@
#
# pip3 install --user psutil xgboost tornado
#
-# To make tvm run faster during tuning, it is recommended to use cython
+# To make TVM run faster during tuning, it is recommended to use cython
# as the FFI of TVM. In the root directory of TVM, execute:
#
# .. code-block:: bash
Expand Down
10 changes: 5 additions & 5 deletions tutorials/autotvm/tune_relay_mobile_gpu.py
@@ -27,7 +27,7 @@
The template has many tunable knobs (tile factor, vectorization, unrolling, etc).
We will tune all convolution, depthwise convolution and dense operators
in the neural network. After tuning, we produce a log file which stores
-the best knob values for all required operators. When the tvm compiler compiles
+the best knob values for all required operators. When the TVM compiler compiles
these operators, it will query this log file to get the best knob values.
We also released pre-tuned parameters for some ARM devices. You can go to
@@ -45,7 +45,7 @@
#
# pip3 install --user psutil xgboost tornado
#
-# To make tvm run faster during tuning, it is recommended to use cython
+# To make TVM run faster during tuning, it is recommended to use cython
# as the FFI of TVM. In the root directory of TVM, execute
# (change "3" to "2" if you use python2):
#
@@ -135,11 +135,11 @@ def get_network(name, batch_size):
# Register devices to RPC Tracker
# -----------------------------------
# Now we can register our devices to the tracker. The first step is to
-# build tvm runtime for the ARM devices.
+# build the TVM runtime for the ARM devices.
#
# * For Linux:
# Follow this section :ref:`build-tvm-runtime-on-device` to build
-# tvm runtime on the device. Then register the device to tracker by
+# the TVM runtime on the device. Then register the device to tracker by
#
# .. code-block:: bash
#
@@ -149,7 +149,7 @@ def get_network(name, batch_size):
#
# * For Android:
# Follow this `readme page <https://github.com/dmlc/tvm/tree/master/apps/android_rpc>`_ to
-# install tvm rpc apk on the android device. Make sure you can pass the android rpc test.
+# install the TVM RPC APK on the Android device. Make sure you can pass the Android RPC test.
# Then you have already registered your device. During tuning, you have to go to developer options
# and enable "Keep screen awake during charging" and charge your phone to make it stable.
#
8 changes: 4 additions & 4 deletions tutorials/autotvm/tune_relay_x86.py
@@ -20,7 +20,7 @@
**Author**: `Yao Wang <https://github.com/kevinthesun>`_, `Eddie Yan <https://github.com/eqy>`_
This is a tutorial about how to tune a convolutional neural network
-for x86 cpu.
+for an x86 CPU.
"""
import os
import numpy as np
@@ -70,7 +70,7 @@ def get_network(name, batch_size):

return net, params, input_shape, output_shape

-# Replace "llvm" with the correct target of your cpu.
+# Replace "llvm" with the correct target of your CPU.
# For example, for AWS EC2 c5 instance with Intel Xeon
# Platinum 8000 series, the target should be "llvm -mcpu=skylake-avx512".
# For AWS EC2 c4 instance with Intel Xeon E5-2666 v3, it should be
@@ -83,15 +83,15 @@ def get_network(name, batch_size):
log_file = "%s.log" % model_name

# Set number of threads used for tuning based on the number of
-# physical cpu cores on your machine.
+# physical CPU cores on your machine.
num_threads = 1
os.environ["TVM_NUM_THREADS"] = str(num_threads)


#################################################################
# Configure tensor tuning settings and create tasks
# -------------------------------------------------
-# To get better kernel execution performance on x86 cpu,
+# To get better kernel execution performance on x86 CPU,
# we need to change data layout of convolution kernel from
# "NCHW" to "NCHWc". To deal with this situation, we define
# conv2d_NCHWc operator in topi. We will tune this operator
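The NCHW-to-NCHWc relayout mentioned above can be illustrated with plain NumPy. This is only a sketch of the data movement, under the assumption that NCHWc splits the channel axis C into (C // c_block, c_block) blocks; it is not TVM's implementation.

```python
import numpy as np

def nchw_to_nchwc(data, c_block):
    """Illustrative NCHW -> NCHWc relayout: split the channel axis C
    into (C // c_block, c_block) and move the inner channel block to
    the innermost axis."""
    n, c, h, w = data.shape
    assert c % c_block == 0, "channel count must divide evenly by c_block"
    return (data
            .reshape(n, c // c_block, c_block, h, w)
            .transpose(0, 1, 3, 4, 2))  # N, C_outer, H, W, c_inner
```

For example, a ``(1, 32, 14, 14)`` tensor with ``c_block=8`` becomes ``(1, 4, 14, 14, 8)``.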
8 changes: 4 additions & 4 deletions tutorials/autotvm/tune_simple_template.py
@@ -38,8 +38,8 @@
#
# pip3 install --user psutil xgboost
#
-# To make tvm run faster in tuning, it is recommended to use cython
-# as FFI of tvm. In the root directory of tvm, execute
+# To make TVM run faster in tuning, it is recommended to use cython
+# as FFI of TVM. In the root directory of TVM, execute
# (change "3" to "2" if you use python2):
#
# .. code-block:: bash
@@ -61,7 +61,7 @@
######################################################################
# Step 1: Define the search space
# --------------------------------
-# In this section, we will rewrite a deterministic tvm schedule code to a
+# In this section, we will rewrite a deterministic TVM schedule code to a
# tunable schedule template. You can regard the process of search space definition
# as the parameterization of our existing schedule code.
#
@@ -288,7 +288,7 @@ def matmul(N, L, M, dtype):
logging.getLogger('autotvm').addHandler(logging.StreamHandler(sys.stdout))

# There are two steps for measuring a config: build and run.
-# By default, we use all cpu cores to compile program. Then measure them sequentially.
+# By default, we use all CPU cores to compile the program, then measure them sequentially.
# We measure 5 times and take average to reduce variance.
measure_option = autotvm.measure_option(
builder='local',
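The "measure 5 times and take the average to reduce variance" idea can be shown in plain Python. This is only an illustration of the statistics, not autotvm's measurement code; ``measure_avg`` is a hypothetical helper.

```python
import statistics
import time

def measure_avg(fn, repeat=5):
    """Time fn() `repeat` times and return the mean wall-clock seconds.
    Averaging several runs reduces the variance of a single noisy sample."""
    samples = []
    for _ in range(repeat):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)
```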
6 changes: 3 additions & 3 deletions tutorials/cross_compilation_and_rpc.py
@@ -35,16 +35,16 @@
# Build TVM Runtime on Device
# ---------------------------
#
-# The first step is to build tvm runtime on the remote device.
+# The first step is to build the TVM runtime on the remote device.
#
# .. note::
#
# All instructions in both this section and the next section should be
# executed on the target device, e.g. a Raspberry Pi. We assume the
# device is running Linux.
#
-# Since we do compilation on local machine, the remote device is only used
-# for running the generated code. We only need to build tvm runtime on
+# Since we do compilation on the local machine, the remote device is only used
+# for running the generated code. We only need to build the TVM runtime on
# the remote device.
#
# .. code-block:: bash
6 changes: 3 additions & 3 deletions tutorials/frontend/deploy_model_on_android.py
@@ -52,7 +52,7 @@
# docker run --pid=host -h tvm -v $PWD:/workspace \
# -w /workspace -p 9190:9190 --name tvm -it tvm.demo_android bash
#
-# You are now inside the container. The cloned tvm directory is mounted on /workspace.
+# You are now inside the container. The cloned TVM directory is mounted on /workspace.
# The 9190 port used by RPC (described later) is also mapped at this time.
#
# .. note::
@@ -74,7 +74,7 @@
# ..
# make -j10
#
-# After building tvm successfully, Please set PYTHONPATH.
+# After building TVM successfully, please set PYTHONPATH.
#
# .. code-block:: bash
#
@@ -106,7 +106,7 @@
# Now we can register our Android device to the tracker.
#
# Follow this `readme page <https://github.com/dmlc/tvm/tree/master/apps/android_rpc>`_ to
-# install tvm rpc apk on the android device.
+# install the TVM RPC APK on the Android device.
#
# Here is an example of config.mk. I enabled OpenCL and Vulkan.
#
2 changes: 1 addition & 1 deletion tutorials/frontend/deploy_model_on_rasp.py
@@ -38,7 +38,7 @@
# Build TVM Runtime on Device
# ---------------------------
#
-# The first step is to build tvm runtime on the remote device.
+# The first step is to build the TVM runtime on the remote device.
#
# .. note::
#
2 changes: 1 addition & 1 deletion tutorials/frontend/deploy_ssd_gluoncv.py
@@ -43,7 +43,7 @@
# To get the best inference performance on CPU, change
# the target argument according to your device and
# follow the :ref:`tune_relay_x86` to tune x86 CPU and
-# :ref:`tune_relay_arm` for arm cpu.
+# :ref:`tune_relay_arm` for ARM CPU.
#
# To get the best performance of SSD on Intel graphics,
# change target argument to 'opencl -device=intel_graphics'