docs: minor spelling tweaks (#4027)
brettkoonce authored and tqchen committed Sep 27, 2019
1 parent 368a4ae commit 18188f4
Showing 14 changed files with 22 additions and 22 deletions.
2 changes: 1 addition & 1 deletion docs/contribute/code_review.rst
@@ -22,7 +22,7 @@ Perform Code Reviews

This is a general guideline for code reviewers. First of all, while it is great to add new features to a project, we must also be aware that each line of code we introduce also brings **technical debt** that we may have to eventually pay.

-Open source code is maintained by a community with diverse backend, and it is even more important to bring clear, documented and maintainable code. Code reviews are shepherding process to spot potential problems, improve quality of the code. We should, however, not rely on code review process to get the code into a ready state. Contributors are encouraged to polish the code to a ready state before requesting reviews. This is especially expected for code owner and comitter candidates.
+Open source code is maintained by a community with diverse backend, and it is even more important to bring clear, documented and maintainable code. Code reviews are shepherding process to spot potential problems, improve quality of the code. We should, however, not rely on code review process to get the code into a ready state. Contributors are encouraged to polish the code to a ready state before requesting reviews. This is especially expected for code owner and committer candidates.

Here are some checklists for code reviews; they are also a helpful reference for contributors

2 changes: 1 addition & 1 deletion docs/contribute/error_handling.rst
@@ -107,7 +107,7 @@ error messages when necessary.
def preferred():
    # Very clear about what is being raised and what is the error message.
-    raise OpNotImplemented("Operator relu is not implemented in the MXNet fronend")
+    raise OpNotImplemented("Operator relu is not implemented in the MXNet frontend")
def _op_not_implemented(op_name):
    return OpNotImplemented("Operator {} is not implemented.".format(op_name))
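
For completeness, a hedged sketch of how such a factory helper is typically used at call sites; ``use_helper`` is an illustrative name, not from the doc:

```python
# Hedged sketch: call sites stay short by delegating message formatting
# to the factory helper defined above. `use_helper` is a made-up name.
def use_helper():
    raise _op_not_implemented("relu")
```
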
2 changes: 1 addition & 1 deletion docs/contribute/pull_request.rst
@@ -51,7 +51,7 @@ We use docker container to create stable CI environments
that can be deployed to multiple machines.
You can find the prebuilt images in `<https://hub.docker.com/r/tvmai/>`_ .
Because we want a relatively stable CI environment and to make use of pre-cached images,
-all of the CI images are built and maintained by comitters.
+all of the CI images are built and maintained by committers.

Upgrades of CI images can cause problems and need fixes to accommodate the new environment.
Here is the protocol to update a CI image:
4 changes: 2 additions & 2 deletions docs/deploy/android.md
@@ -22,7 +22,7 @@

NNVM compilation of a model for an Android target can follow the same approach as android_rpc.

-An reference exampe can be found at [chainer-nnvm-example](https://github.com/tkat0/chainer-nnvm-example)
+An reference example can be found at [chainer-nnvm-example](https://github.com/tkat0/chainer-nnvm-example)

The above example will directly run the compiled model on the RPC target. The modification below at [run_mobile.py](https://github.com/tkat0/chainer-nnvm-example/blob/5b97fd4d41aa4dde4b0aceb0be311054fb5de451/run_mobile.py#L64) will save the compilation output, which is required on the Android target.
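
A hedged sketch of the kind of modification meant here, assuming NNVM-era APIs (``nnvm.compiler``, ``tvm.contrib.ndk``); the exact run_mobile.py change may differ:

```python
import nnvm.compiler
import nnvm.symbol as sym
from tvm.contrib import ndk

# A toy one-operator graph, standing in for the converted model.
x = sym.Variable("x")
y = sym.relu(x)
graph, lib, params = nnvm.compiler.build(
    y, target="llvm -target=arm64-linux-android",
    shape={"x": (1, 3, 224, 224)}, params={})

# Requires the TVM_NDK_CC environment variable to point at the NDK compiler.
lib.export_library("deploy_lib.so", ndk.create_shared)  # cross-compile via the Android NDK
with open("deploy_graph.json", "w") as f:
    f.write(graph.json())                                # serialized execution graph
with open("deploy_param.params", "wb") as f:
    f.write(nnvm.compiler.save_param_dict(params))       # serialized weights
```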

@@ -39,4 +39,4 @@ deploy_lib.so, deploy_graph.json, deploy_param.params will go to android target.
## TVM Runtime for Android Target

Refer [here](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/README.md#build-and-installation) to build the CPU/OpenCL flavor of the TVM runtime for the Android target.
-From android java TVM API to load model & execute can be refered at this [java](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java) sample source.
+From android java TVM API to load model & execute can be referred at this [java](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java) sample source.
2 changes: 1 addition & 1 deletion docs/dev/hybrid_script.rst
@@ -83,7 +83,7 @@ In HalideIR, loops have in total 4 types: ``serial``, ``unrolled``, ``parallel``
Variables
~~~~~~~~~

-Because there is no variables in ``HalideIR``, all the mutatable variables will be lowered to an array with size 1.
+Because there is no variables in ``HalideIR``, all the mutable variables will be lowered to an array with size 1.
It takes the first store of a variable as its declaration.
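
A hedged plain-Python analogy of this lowering (illustrative, not from the doc):

```python
# A mutable scalar in hybrid script...
a = [1.0, 2.0, 3.0]

s = 0.0                  # hybrid-script style: the first store declares s
for x in a:
    s = s + x

s_buf = [0.0]            # what lowering produces: an array with size 1
for x in a:
    s_buf[0] = s_buf[0] + x

assert s == s_buf[0]     # same result; only the storage differs
```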

Math Intrinsics
8 changes: 4 additions & 4 deletions docs/dev/inferbound.rst
@@ -71,7 +71,7 @@ A TVM schedule is composed of Stages. Each stage has exactly one Operation, e.g.
  Array<Operation> outputs;
  Array<Stage> stages;
  Map<Operation, Stage> stage_map;
-  // remainder ommitted
+  // remainder omitted
};
class StageNode : public Node {
@@ -81,14 +81,14 @@ A TVM schedule is composed of Stages. Each stage has exactly one Operation, e.g.
  Array<IterVar> all_iter_vars;
  Array<IterVar> leaf_iter_vars;
  Array<IterVarRelation> relations;
-  // remainder ommitted
+  // remainder omitted
};
class OperationNode : public Node {
 public:
  virtual Array<IterVar> root_iter_vars();
  virtual Array<Tensor> InputTensors();
-  // remainder ommitted
+  // remainder omitted
};
class ComputeOpNode : public OperationNode {
@@ -97,7 +97,7 @@ A TVM schedule is composed of Stages. Each stage has exactly one Operation, e.g.
  Array<IterVar> reduce_axis;
  Array<Expr> body;
  Array<IterVar> root_iter_vars();
-  // remainder ommitted
+  // remainder omitted
};
}
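
A hedged Python-side sketch of how these nodes surface in user code (the API names assume the TVM of this era):

```python
import tvm

n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] * 2, name="B")

s = tvm.create_schedule(B.op)  # a Schedule: holds outputs, stages, stage_map
stage = s[B]                   # the Stage wrapping B's ComputeOpNode
print(stage.op.axis)           # the operation's root itervars
```
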
2 changes: 1 addition & 1 deletion docs/dev/relay_pass_infra.rst
@@ -83,7 +83,7 @@ more details). For example, during registration of a pass (will be covered in
later), the pass developers can specify the name of the pass, the optimization
level it will be performed at, and/or the passes that are required.
``opt_level`` could be used to help the pass infra identify if a certain pass
-needes to be executed when running under a user-provided optimization level. The
+needs to be executed when running under a user-provided optimization level. The
``required`` field can be used by the pass infra to resolve pass dependencies.

.. code:: c++
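
A hedged Python-side sketch of how ``opt_level`` gates pass execution (the relay transform API names of this era are assumptions of this sketch):

```python
from tvm import relay

x = relay.var("x", shape=(2,))
mod = relay.Module.from_expr(relay.Function([x], relay.add(x, x)))

seq = relay.transform.Sequential([
    relay.transform.FoldConstant(),
    relay.transform.EliminateCommonSubexpr(),
])

# Only passes whose registered opt_level is at or below the configured
# level, or that are explicitly required, will execute.
with relay.build_config(opt_level=3):
    mod = seq(mod)
```
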
2 changes: 1 addition & 1 deletion docs/faq.md
@@ -31,7 +31,7 @@ This representation is high level, and can be helpful to perform generic optimiz
such as memory reuse, layout transformation and automatic differentiation.

TVM adopts a low-level representation that explicitly expresses the choice of memory
-layout, parallelization pattern, locality and hardware primtives etc.
+layout, parallelization pattern, locality and hardware primitives etc.
This level of IR is closer to directly targeting hardware.
The low-level IR adopts ideas from existing image processing languages like Halide and darkroom,
and from loop transformation tools like loopy and polyhedra-based analysis.
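
A hedged sketch of such explicit choices in TVM's scheduling API (era-appropriate names assumed):

```python
import tvm

n = 512
A = tvm.placeholder((n, n), name="A")
B = tvm.compute((n, n), lambda i, j: A[i, j] + 1.0, name="B")

s = tvm.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=32)  # explicit loop structure / locality
yo, yi = s[B].split(B.op.axis[1], factor=8)
s[B].parallel(xo)                             # explicit parallelization pattern
s[B].vectorize(yi)                            # explicit use of vector hardware
```
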
2 changes: 1 addition & 1 deletion docs/install/from_source.rst
@@ -86,7 +86,7 @@ The configuration of TVM can be modified by `config.cmake`.

- TVM optionally depends on LLVM. LLVM is required for CPU codegen.

-- LLVM 4.0 or higher is needed for build with LLVM. Note that verison of LLVM from default apt may lower than 4.0.
+- LLVM 4.0 or higher is needed for build with LLVM. Note that version of LLVM from default apt may lower than 4.0.
- Since LLVM takes a long time to build from source, you can download a pre-built version of LLVM from the
  `LLVM Download Page <http://releases.llvm.org/download.html>`_.

4 changes: 2 additions & 2 deletions docs/langref/hybrid_script.rst
@@ -130,7 +130,7 @@ Users can access containers by either constants or constants loops annotated.
Variables
~~~~~~~~~

-All the mutatable variables will be lowered to an array with size 1.
+All the mutable variables will be lowered to an array with size 1.
It regards the first store of a variable as its declaration.

.. note::
@@ -158,7 +158,7 @@ Attributes
~~~~~~~~~~

So far, ONLY tensors' ``shape`` and ``dtype`` attributes are supported!
-The ``shape`` atrribute is essentailly a tuple, so you MUST access it as an array.
+The ``shape`` attribute is essentially a tuple, so you MUST access it as an array.
Currently, only constant-indexed access is supported.

.. code-block:: python
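
A hedged illustration of the constant-index rule (the decorator and the ``output_tensor`` intrinsic assume the hybrid script frontend of this era):

```python
import tvm

@tvm.hybrid.script
def add_one(a):
    b = output_tensor(a.shape, a.dtype)  # hybrid intrinsic, resolved at compile time
    for i in range(a.shape[0]):          # OK: shape indexed by the constant 0
        b[i] = a[i] + 1.0
    return b

# a.shape[i] with a loop variable i would be rejected: only constant indices work.
```
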
4 changes: 2 additions & 2 deletions docs/langref/relay_expr.rst
@@ -26,7 +26,7 @@ Dataflow and Control Fragments
==============================

For the purposes of comparing Relay to traditional computational graph-based IRs, it
-can be useful to consider Relay exrpessions in terms of dataflow and control fragments.
+can be useful to consider Relay expressions in terms of dataflow and control fragments.
Each portion of a Relay program containing expressions that only affect the dataflow can
be viewed as a traditional computation graph when writing and expressing transformations.
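
For illustration, a hedged sketch of a pure-dataflow fragment built through the Python API (names assume the relay API of this era):

```python
from tvm import relay

x = relay.var("x", shape=(2, 2))
y = relay.add(x, x)         # dataflow only: each expression is a graph node
z = relay.multiply(y, y)
f = relay.Function([x], z)  # the whole fragment reads as a computation graph
```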

@@ -88,7 +88,7 @@ expression where it is bound, respectively.
In the below code segment, notice that :code:`%a` is defined twice. This is
permitted, as in most functional languages; in the scope of the second
:code:`let` expression, the name :code:`%a` is "shadowed," meaning all
-references to :code:`%a` in the inner scope refer to the later defintion, while
+references to :code:`%a` in the inner scope refer to the later definition, while
references to :code:`%a` in the outer scope continue to refer to
the first one.

4 changes: 2 additions & 2 deletions docs/langref/relay_type.rst
@@ -290,7 +290,7 @@ parameters must be treated as different types) and be
recursive (a constructor for an ADT can take an instance of
that ADT, thus an ADT like a tree or list can be inductively
built up). The representation of ADTs in the type system must
-be able to accomodate these facts, as the below sections will detail.
+be able to accommodate these facts, as the below sections will detail.
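
A hedged sketch of a recursive, type-parameterized ADT in practice, via relay's prelude (attribute names such as ``p.cons`` and ``p.nil`` are assumptions from the API of this era):

```python
from tvm import relay
from tvm.relay.prelude import Prelude

mod = relay.Module()
p = Prelude(mod)  # registers list/option/tree ADTs into the module

# cons takes an element and another list, so the list ADT is recursive,
# and it is parameterized over the element type.
one_elem = p.cons(relay.const(1), p.nil())
```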

Global Type Variable
~~~~~~~~~~~~~~~~~~~~
@@ -316,7 +316,7 @@ Definitions (Type Data)
~~~~~~~~~~~~~~~~~~~~~~~

Besides a name, an ADT needs to store the constructors that are used
-to define it and any type paramters used within them. These are
+to define it and any type parameters used within them. These are
stored in the module, :ref:`analogous to global function definitions<module-description>`.

While type-checking uses of ADTs, the type system sometimes must
4 changes: 2 additions & 2 deletions docs/vta/dev/config.rst
@@ -68,7 +68,7 @@ below.
We provide additional detail below regarding each parameter:

- ``TARGET``: Can be set to ``"pynq"``, ``"ultra96"``, ``"sim"`` (fast simulator), or ``"tsim"`` (cycle accurate sim with verilator).
-- ``HW_VER``: Hardware version which increments everytime the VTA hardware design changes. This parameter is used to uniquely idenfity hardware bitstreams.
+- ``HW_VER``: Hardware version which increments every time the VTA hardware design changes. This parameter is used to uniquely identify hardware bitstreams.
- ``LOG_BATCH``: Equivalent to A in multiplication of shape (A, B) x (B, C), or typically, the batch dimension of inner tensor computation.
-- ``LOG_BLOCK``: Equivalent to B and C in multiplication of shape (A, B) x (B, C), or typically, the input/output channel dimensions of the innter tensor computation.
+- ``LOG_BLOCK``: Equivalent to B and C in multiplication of shape (A, B) x (B, C), or typically, the input/output channel dimensions of the inner tensor computation.
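
A hedged sketch of how these parameters surface on the Python side (``vta.get_env`` and the attribute names are assumptions from the VTA API of this era):

```python
import vta

env = vta.get_env()                  # reads the parameters from vta_config.json
print(env.TARGET)                    # e.g. "sim"
print(env.BATCH)                     # 2 ** LOG_BATCH
print(env.BLOCK_IN, env.BLOCK_OUT)   # 2 ** LOG_BLOCK
```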

2 changes: 1 addition & 1 deletion docs/vta/install.md
@@ -202,7 +202,7 @@ Before powering up the device, we need to flash the microSD card image with late
#### Flash SD Card and Boot Angstrom Linux

To flash the SD card and boot Linux on the DE10-Nano, it is recommended to navigate to the [Resource](https://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=167&No=1046&PartNo=4) tab of the DE10-Nano product page from Terasic Inc.
-After registeration and login on the webpage, the prebuild Angstrom Linux image would be available for downloading and flashing.
+After registration and login on the webpage, the prebuilt Angstrom Linux image would be available for downloading and flashing.
Specifically, to flash the downloaded Linux SD card image into your physical SD card:

First, extract the gzipped archive file.
