Commit

fix typo (#56)
tornadomeet authored and tqchen committed Sep 23, 2016
1 parent 7f8acfd commit 479510b
Showing 2 changed files with 6 additions and 6 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -15,11 +15,11 @@ We believe that the decentralized modular system is an interesting direction.

The hope is that effective parts can be assembled together just like you assemble your own desktops.
So the customized deep learning solution can be minimax, minimum in terms of dependencies,
-while maxiziming the users' need.
+while maximizing the users' need.

NNVM offers one such part, it provides a generic way to do
computation graph optimization such as memory reduction, device allocation and more
-while being agnostic to the operator interface defintion and how operators are executed.
+while being agnostic to the operator interface definition and how operators are executed.
NNVM is inspired by LLVM, aiming to be a high level intermediate representation library
for neural nets and computation graphs generation and optimizations.

@@ -32,7 +32,7 @@ This is essentially ***Unix philosophy*** applied to machine learning system.
- Essential parts can be assembled in minimum way for embedding systems.
- Developers can hack the parts they need and compose with other well defined parts.
- Decentralized modules enable new extensions creators to own their project
-  without creating a monothilic version.
+  without creating a monolithic version.

Deep learning system itself is not necessary one part, for example
here are some relative independent parts that can be isolated
6 changes: 3 additions & 3 deletions docs/overview.md
@@ -13,7 +13,7 @@ with the modular tools like CuDNN and CUDA, it is not hard to assemble a C++ API
However, most users like to use python/R/scala or other languages.
By registering the operators to NNVM, X can now get the graph composition
language front-end on these languages quickly without coding it up for
-each type of langugage.
+each type of language.

Y want to build a deep learning serving system on embedded devices.
To do that, we need to cut things off, as opposed to add new parts,
@@ -97,7 +97,7 @@ Eventually the operator interface become big and have to evolve in the centraliz

In NNVM, we decided to change the design and support arbitrary type of operator attributes,
without need to change the operator registry. This also echos the need of minimum interface
-so that the code can be easier to share accross multiple projects
+so that the code can be easier to share across multiple projects

User can register new attribute, such as inplace property checking function as follows.
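As a rough, self-contained sketch of the idea behind such a registration (the `Op`, `set_attr`, and `make_exp_op` names below are illustrative stand-ins for NNVM's registry, not its exact API; `FInplaceOption` mirrors the attribute named in the docs):

```c++
#include <any>
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Illustrative stand-in for an operator entry: a map from attribute name
// to a type-erased value, so new attribute kinds can be registered without
// changing the registry itself.
struct Op {
  std::string name;
  std::unordered_map<std::string, std::any> attrs;

  template <typename T>
  Op& set_attr(const std::string& key, T value) {
    attrs[key] = std::move(value);
    return *this;
  }

  template <typename T>
  T get_attr(const std::string& key) const {
    return std::any_cast<T>(attrs.at(key));
  }
};

// An "inplace option" attribute: pairs of (input index, output index)
// whose memory may be shared.
using FInplaceOption = std::function<std::vector<std::pair<int, int>>()>;

inline Op make_exp_op() {
  Op op{"exp", {}};
  op.set_attr<FInplaceOption>("FInplaceOption", []() {
    // exp can be computed in place: output 0 may reuse input 0.
    return std::vector<std::pair<int, int>>{{0, 0}};
  });
  return op;
}
```

Because the stored value is type-erased, any project can invent its own attribute type and register it the same way.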
@@ -122,7 +122,7 @@ NNVM_REGISTER_OP(exp)
These attributes can be queried at arbitrary parts of the code, like the following parts.
-Under the hood, each attributes are stored in a any type columar store,
+Under the hood, each attributes are stored in a any type columnar store,
that can easily be retrieved and cast back to typed table and do quick lookups.
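A minimal sketch of that any-typed columnar idea (the `AttrStore` class and its method names here are hypothetical, not NNVM's actual implementation): one type-erased column per attribute name, indexed by operator id, which can be cast back to a typed table for fast repeated lookups.

```c++
#include <any>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical any-typed columnar attribute store: one column per
// attribute name, each cell indexed by operator id.
class AttrStore {
 public:
  template <typename T>
  void set(const std::string& attr, std::size_t op_id, T value) {
    auto& col = columns_[attr];
    if (col.size() <= op_id) col.resize(op_id + 1);
    col[op_id] = std::move(value);
  }

  // Cast a whole column back to a typed table once, so later lookups
  // are plain vector indexing with no per-access type erasure cost.
  template <typename T>
  std::vector<T> typed_column(const std::string& attr) const {
    std::vector<T> out;
    for (const auto& cell : columns_.at(attr)) {
      out.push_back(std::any_cast<T>(cell));
    }
    return out;
  }

 private:
  std::unordered_map<std::string, std::vector<std::any>> columns_;
};
```

The point of the columnar layout is that a pass which needs one attribute for every operator touches a single contiguous column instead of walking every operator's attribute map.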
