Merge pull request apache#16 from kevinthesun/Build0.11RC3
 Build 0.11 RC3 and show gluon
cjolivier01 authored Aug 25, 2017
2 parents e031fa4 + 88d9917 commit e13561f
Showing 2,606 changed files with 129,924 additions and 206,038 deletions.
README.html (6 changes: 3 additions & 3 deletions)
@@ -151,7 +151,7 @@ <h1 id="logo-wrap">
<a class="main-nav-link" href="./architecture/index.html">Architecture</a>
<!-- <a class="main-nav-link" href="./community/index.html">Community</a> -->
<a class="main-nav-link" href="https://github.com/dmlc/mxnet">Github</a>
-<span id="dropdown-menu-position-anchor-version" style="position: relative"><a href="#" class="main-nav-link dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="true">Versions(0.11-RC)<span class="caret"></span></a><ul id="package-dropdown-menu" class="dropdown-menu"><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/>0.11-RC</a></li><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/versions/master/index.html>master</a></li></ul></span></nav>
+<span id="dropdown-menu-position-anchor-version" style="position: relative"><a href="#" class="main-nav-link dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="true">Versions(0.11.0.rc3)<span class="caret"></span></a><ul id="package-dropdown-menu" class="dropdown-menu"><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/>0.11.0.rc3</a></li><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/versions/master/index.html>master</a></li></ul></span></nav>
<script> function getRootPath(){ return "./" } </script>
<div class="burgerIcon dropdown">
<a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button"></a>
@@ -178,7 +178,7 @@ <h1 id="logo-wrap">
</li>
<li><a href="./architecture/index.html">Architecture</a></li>
<li><a class="main-nav-link" href="https://github.com/dmlc/mxnet">Github</a></li>
-<li id="dropdown-menu-position-anchor-version-mobile" class="dropdown-submenu" style="position: relative"><a href="#" tabindex="-1">Versions(0.11-RC)</a><ul class="dropdown-menu"><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/>0.11-RC</a></li><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/versions/master/index.html>master</a></li></ul></li></ul>
+<li id="dropdown-menu-position-anchor-version-mobile" class="dropdown-submenu" style="position: relative"><a href="#" tabindex="-1">Versions(0.11.0.rc3)</a><ul class="dropdown-menu"><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/>0.11.0.rc3</a></li><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/versions/master/index.html>master</a></li></ul></li></ul>
</div>
<div class="plusIcon dropdown">
<a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button"><span aria-hidden="true" class="glyphicon glyphicon-plus"></span></a>
@@ -240,7 +240,7 @@ <h1 id="logo-wrap">
<p>To build the documents locally, we need to first install <a class="reference external" href="https://docker.com">docker</a>.
Then use the following commands to clone and
build the documents.</p>
-<div class="highlight-bash"><div class="highlight"><pre><span></span>git clone --recursive https://github.com/apache/incubator-mxnet.git --branch 0.11.0.rc1
+<div class="highlight-bash"><div class="highlight"><pre><span></span>git clone --recursive https://github.com/apache/incubator-mxnet.git --branch 0.11.0.rc3
<span class="nb">cd</span> mxnet <span class="o">&amp;&amp;</span> make docs
</pre></div>
</div>
File renamed without changes.
File renamed without changes.
File renamed without changes.
_sources/api/python/index.md.txt (3 changes: 3 additions & 0 deletions)
@@ -28,9 +28,12 @@ imported by running:
ndarray
symbol
module
+autograd
+gluon
rnn
kvstore
io
+image
optimization
callback
metric
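
The two newly listed modules are designed to work together: `gluon` provides imperative network building blocks, and `autograd` records computation for differentiation. A minimal sketch, assuming the `mxnet.gluon` and `mxnet.autograd` APIs introduced with 0.11:

```python
import mxnet as mx
from mxnet import autograd, gluon

net = gluon.nn.Dense(1)            # a single fully-connected layer
net.collect_params().initialize()  # default-initialize its parameters

x = mx.nd.ones((4, 2))
with autograd.record():            # track operations for autodiff
    y = net(x)
y.backward()                       # populate gradients of the parameters
```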
_sources/api/python/io.md.txt (30 changes: 1 addition & 29 deletions)
@@ -62,6 +62,7 @@ A detailed tutorial is available at
recordio.MXRecordIO
recordio.MXIndexedRecordIO
image.ImageIter
+image.ImageDetIter
```

## Helper classes and functions
@@ -81,33 +82,6 @@ Data structures and other iterators provided in the ``mxnet.io`` packages.
io.MXDataIter
```
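
For example, `mx.io.NDArrayIter` (one of the helper iterators documented in this module) wraps in-memory arrays into a batched data iterator; a small sketch:

```python
import mxnet as mx

data = mx.nd.ones((10, 3))
label = mx.nd.zeros((10,))
train_iter = mx.io.NDArrayIter(data, label, batch_size=5, shuffle=True)
for batch in train_iter:
    # each batch carries lists of data and label arrays
    print(batch.data[0].shape, batch.label[0].shape)   # (5, 3) (5,)
```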

-A list of image modification functions provided by ``mxnet.image``.
-
-```eval_rst
-.. autosummary::
-:nosignatures:
-
-image.imdecode
-image.scale_down
-image.resize_short
-image.fixed_crop
-image.random_crop
-image.center_crop
-image.color_normalize
-image.random_size_crop
-image.ResizeAug
-image.RandomCropAug
-image.RandomSizedCropAug
-image.CenterCropAug
-image.RandomOrderAug
-image.ColorJitterAug
-image.LightingAug
-image.ColorNormalizeAug
-image.HorizontalFlipAug
-image.CastAug
-image.CreateAugmenter
-```

Functions to read and write RecordIO files.

```eval_rst
@@ -179,8 +153,6 @@ The backend engine will recognize the index of `N` in the `layout` as the axis f
```eval_rst
.. automodule:: mxnet.io
:members:
-.. automodule:: mxnet.image
-:members:
.. automodule:: mxnet.recordio
:members:
```
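
To make the RecordIO functions mentioned above concrete, here is a brief sketch of sequential write and read-back (the file name `tmp.rec` is arbitrary):

```python
import mxnet as mx

record = mx.recordio.MXRecordIO('tmp.rec', 'w')   # open for writing
for i in range(3):
    record.write('record_%d' % i)
record.close()

record = mx.recordio.MXRecordIO('tmp.rec', 'r')   # read back in order
while True:
    item = record.read()                          # None signals end of file
    if item is None:
        break
    print(item)
record.close()
```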
_sources/api/python/ndarray.md.txt (34 changes: 34 additions & 0 deletions)
@@ -463,6 +463,37 @@ In the rest of this document, we first overview the methods provided by the
Custom
```

+## Contrib
+
+```eval_rst
+.. warning:: This package contains experimental APIs and may change in the near future.
+```
+
+The `contrib.ndarray` module contains many useful experimental APIs for new features. This is a place for the community to try out the new features, so that feature contributors can receive feedback.
+
+```eval_rst
+.. currentmodule:: mxnet.contrib.ndarray
+
+.. autosummary::
+:nosignatures:
+
+CTCLoss
+DeformableConvolution
+DeformablePSROIPooling
+MultiBoxDetection
+MultiBoxPrior
+MultiBoxTarget
+MultiProposal
+PSROIPooling
+Proposal
+count_sketch
+ctc_loss
+dequantize
+fft
+ifft
+quantize
+```
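
As one example of the style of these operators, `MultiBoxPrior` generates SSD-style anchor boxes from a feature map. A hedged sketch (operator name from the list above; the sizes and ratios values are arbitrary):

```python
import mxnet as mx
from mxnet.contrib import ndarray as contrib_nd

feat = mx.nd.zeros((1, 8, 4, 4))   # (batch, channel, height, width)
anchors = contrib_nd.MultiBoxPrior(feat, sizes=[0.5, 0.25], ratios=[1, 2])
# one set of corner-encoded boxes per feature-map location,
# e.g. shape (1, 48, 4) here: 4*4 locations x 3 anchors each
print(anchors.shape)
```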

## API Reference

<script type="text/javascript" src='../../_static/js/auto_module_index.js'></script>
@@ -474,6 +505,9 @@ In the rest of this document, we first overview the methods provided by the
.. automodule:: mxnet.random
:members:

+.. automodule:: mxnet.contrib.ndarray
+:members:

```

<script>auto_index("api-reference");</script>
_sources/api/python/symbol.md.txt (35 changes: 35 additions & 0 deletions)
@@ -253,6 +253,7 @@ Composite multiple symbols into a new one by an operator.
broadcast_div
broadcast_mod
negative
+reciprocal
dot
batch_dot
add_n
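
The newly added `reciprocal` operator is the element-wise multiplicative inverse; a quick sketch, assuming it behaves like the other element-wise math operators:

```python
import mxnet as mx

x = mx.sym.Variable('x')
y = mx.sym.reciprocal(x)           # element-wise 1/x
ex = y.bind(mx.cpu(), {'x': mx.nd.array([2.0, 4.0])})
print(ex.forward()[0].asnumpy())   # -> [0.5, 0.25]
```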
@@ -479,6 +480,37 @@ Composite multiple symbols into a new one by an operator.
Custom
```

+## Contrib
+
+```eval_rst
+.. warning:: This package contains experimental APIs and may change in the near future.
+```
+
+The `contrib.symbol` module contains many useful experimental APIs for new features. This is a place for the community to try out the new features, so that feature contributors can receive feedback.
+
+```eval_rst
+.. currentmodule:: mxnet.contrib.symbol
+
+.. autosummary::
+:nosignatures:
+
+CTCLoss
+DeformableConvolution
+DeformablePSROIPooling
+MultiBoxDetection
+MultiBoxPrior
+MultiBoxTarget
+MultiProposal
+PSROIPooling
+Proposal
+count_sketch
+ctc_loss
+dequantize
+fft
+ifft
+quantize
+```
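
Because these are symbols rather than arrays, shapes can be inspected before any data is bound; a hedged sketch using `MultiBoxPrior` from the list above:

```python
import mxnet as mx
from mxnet.contrib import symbol as contrib_sym

data = mx.sym.Variable('data')
anchors = contrib_sym.MultiBoxPrior(data, sizes=[0.5], ratios=[1, 2])
_, out_shapes, _ = anchors.infer_shape(data=(1, 8, 4, 4))
print(out_shapes)                  # e.g. [(1, 32, 4)] for a 4x4 feature map
```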

## API Reference

<script type="text/javascript" src='../../_static/js/auto_module_index.js'></script>
@@ -487,6 +519,9 @@ Composite multiple symbols into a new one by an operator.
.. automodule:: mxnet.symbol
:members:

+.. automodule:: mxnet.contrib.symbol
+:members:

```

<script>auto_index("api-reference");</script>
_sources/architecture/overview.md.txt (2 changes: 1 addition & 1 deletion)
@@ -48,7 +48,7 @@ The following API is the core interface for the execution engine:
This API allows you to push a function (`exec_fun`),
along with its context information and dependencies, to the engine.
`exec_ctx` is the context information in which the `exec_fun` should be executed,
-`const_vars` denotes the variables that the function reads from,
+`const_vars` denotes the variables that the function reads from,
and `mutate_vars` are the variables to be modified.
The engine provides the following guarantee:

_sources/architecture/program_model.md.txt (25 changes: 13 additions & 12 deletions)
@@ -92,7 +92,7 @@ are powerful DSLs that generate callable computation graphs for neural networks.
<!-- In that sense, config-file input libraries are all symbolic. -->

Intuitively, you might say that imperative programs
-are more *native* than symbolic programs.
+are more *native* than symbolic programs.
It's easier to use native language features.
For example, it's straightforward to print out the values
in the middle of computation or to use native control flow and loops
@@ -269,7 +269,7 @@ Recall the *be prepared to encounter all possible demands* requirement of impera
If you are creating an array library that supports automatic differentiation,
you have to keep the grad closure along with the computation.
This means that none of the history variables can be
-garbage-collected because they are referenced by variable `d` by way of function closure.
+garbage-collected because they are referenced by variable `d` by way of function closure.

What if you want to compute only the value of `d`,
and don't want the gradient value?
@@ -305,7 +305,6 @@ For example, one solution to the preceding
problem is to introduce a context variable.
You can introduce a no-gradient context variable
to turn gradient calculation off.
-<!-- This provides an imperative program with the ability to impose some restrictions, but reduces efficiency. -->

```python
with context.NoGradient():
@@ -315,6 +314,8 @@ to turn gradient calculation off.
d = c + 1
```

+<!-- This provides an imperative program with the ability to impose some restrictions, but reduces efficiency. -->

However, this example still must be prepared to encounter all possible demands,
which means that you can't perform the in-place calculation
to reuse memory in the forward pass (a trick commonly used to reduce GPU memory usage).
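
For reference, MXNet's `mxnet.autograd` module makes gradient bookkeeping opt-in rather than opt-out, which addresses the same trade-off; a minimal sketch, assuming the 0.11 `autograd.record`/`attach_grad` API:

```python
import mxnet as mx
from mxnet import autograd

a = mx.nd.ones((2, 2))
a.attach_grad()              # allocate space for a's gradient
with autograd.record():      # history is kept only inside this scope
    b = a * 2
    c = b + 1
c.backward()
print(a.grad.asnumpy())      # dc/da: all entries are 2.0
```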
@@ -380,15 +381,15 @@ It's usually easier to write parameter updates in an imperative style,
especially when you need multiple updates that relate to each other.
For symbolic programs, the update statement is also executed as you call it.
So in that sense, most symbolic deep learning libraries
-fall back on the imperative approach to perform updates,
+fall back on the imperative approach to perform updates,
while using the symbolic approach to perform gradient calculation.

### There Is No Strict Boundary

In comparing the two programming styles,
some of our arguments might not be strictly true,
i.e., it's possible to make an imperative program
-more like a traditional symbolic program or vice versa.
+more like a traditional symbolic program or vice versa.
However, the two archetypes are useful abstractions,
especially for understanding the differences between deep learning libraries.
We might reasonably conclude that there is no clear boundary between programming styles.
@@ -400,7 +401,7 @@ information held in symbolic programs.

## Big vs. Small Operations

-When designing a deep learning library, another important programming model decision
+When designing a deep learning library, another important programming model decision
is precisely what operations to support.
In general, there are two families of operations supported by most deep learning libraries:

@@ -418,7 +419,7 @@ For example, the sigmoid unit can simply be composed of division, addition and a
sigmoid(x) = 1.0 / (1.0 + exp(-x))
```
Using smaller operations as building blocks, you can express nearly anything you want.
-If you're more familiar with CXXNet- or Caffe-style layers,
+If you're more familiar with CXXNet- or Caffe-style layers,
note that these operations don't differ from a layer, except that they are smaller.

```python
@@ -433,7 +434,7 @@ because you only need to compose the components.
Directly composing sigmoid layers requires three layers of operation, instead of one.

```python
-SigmoidLayer(x) = EWiseDivisionLayer(1.0, AddScalarLayer(ExpLayer(-x), 1.0))
+SigmoidLayer(x) = EWiseDivisionLayer(1.0, AddScalarLayer(ExpLayer(-x), 1.0))
```
This code creates overhead for computation and memory (which could be optimized, with cost).
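
For contrast, composing the same sigmoid from small symbolic operations takes a single expression (a sketch using the operator overloading provided by `mxnet.symbol`):

```python
import mxnet as mx

x = mx.sym.Variable('x')
sigmoid = 1.0 / (1.0 + mx.sym.exp(-x))        # composed from small ops
ex = sigmoid.bind(mx.cpu(), {'x': mx.nd.array([0.0, 2.0])})
print(ex.forward()[0].asnumpy())              # -> [0.5, ~0.88]
```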

@@ -467,7 +468,7 @@ these optimizations are crucial to performance.
Because the operations are small,
there are many sub-graph patterns that can be matched.
Also, because the final, generated operations
-might not enumerable,
+might not be enumerable,
an explicit recompilation of the kernels is required,
as opposed to the fixed amount of precompiled kernels
in the big operation libraries.
Expand All @@ -476,7 +477,7 @@ that support small operations.
Requiring compilation optimization also creates engineering overhead
for the libraries that solely support smaller operations.

-As in the case of symbolic vs imperative,
+As in the case of symbolic vs. imperative,
the bigger operation libraries "cheat"
by asking you to provide restrictions (to the common layer),
so that you actually perform the sub-graph matching.
@@ -522,7 +523,7 @@ The more suitable programming style depends on the problem you are trying to sol
For example, imperative programs are better for parameter updates,
and symbolic programs for gradient calculation.

-We advocate *mixing* the approaches.
+We advocate *mixing* the approaches.
Sometimes the part that we want to be flexible
isn't crucial to performance.
In these cases, it's okay to leave some efficiency on the table
@@ -562,7 +563,7 @@ This is exactly like writing C++ programs and exposing them to Python, which we
Because parameter memory resides on the GPU,
you might not want to use NumPy as an imperative component.
Supporting a GPU-compatible imperative library
-that interacts with symbolic compiled functions
+that interacts with symbolic compiled functions
or provides a limited amount of updating syntax
in the update statement in symbolic program execution
might be a better choice.
_sources/get_started/install.md.txt (10 changes: 6 additions & 4 deletions)
@@ -235,10 +235,10 @@ $ make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas

**Build the MXNet Python binding**

-**Step 1** Install prerequisites - python setup tools and numpy.
+**Step 1** Install prerequisites - python, setup-tools, python-pip and numpy.

```bash
-$ sudo apt-get install -y python-dev python-setuptools python-numpy
+$ sudo apt-get install -y python-dev python-setuptools python-numpy python-pip
```

**Step 2** Install the MXNet Python binding.
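
Once the binding is installed, a quick sanity check from Python (a hedged sketch; any small NDArray computation will do):

```python
import mxnet as mx

a = mx.nd.ones((2, 3))
print((a * 2).asnumpy())   # a 2x3 array of 2.0 confirms the binding works
```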
@@ -458,10 +458,10 @@ $ make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/

**Install the MXNet Python binding**

-**Step 1** Install prerequisites - python setup tools and numpy.
+**Step 1** Install prerequisites - python, setup-tools, python-pip and numpy.

```bash
-$ sudo apt-get install -y python-dev python-setuptools python-numpy
+$ sudo apt-get install -y python-dev python-setuptools python-numpy python-pip
```

**Step 2** Install the MXNet Python binding.
@@ -1462,3 +1462,5 @@ Will be available soon.

</div>
</div>

+# Download Source Package
_sources/get_started/windows_setup.md.txt (3 changes: 1 addition & 2 deletions)
@@ -9,7 +9,6 @@ You can either use a prebuilt binary package or build from source to build the M
MXNet provides a prebuilt package for Windows. The prebuilt package includes the MXNet library, all of the dependent third-party libraries, a sample C++ solution for Visual Studio, and the Python installation script. To install the prebuilt package:

1. Download the latest prebuilt package from the [Releases](https://github.com/dmlc/mxnet/releases) tab of MXNet.
-There are two versions. One with GPU support (using CUDA and CUDNN v3), and one without GPU support. Choose the version that suits your hardware configuration. For more information on which version works on each hardware configuration, see [Requirements for GPU](http://mxnet.io/get_started/setup.html#requirements-for-using-gpus).
2. Unpack the package into a folder, with an appropriate name, such as ```D:\MXNet```.
3. Open the folder, and install the package by double-clicking ```setupenv.cmd```. This sets up all of the environment variables required by MXNet.
4. Test the installation by opening the provided sample C++ Visual Studio solution and building it.
@@ -23,7 +22,7 @@ This produces a library called ```libmxnet.dll```.
To build and install MXNet yourself, you need the following dependencies. Install the required dependencies:

1. If [Microsoft Visual Studio 2013](https://www.visualstudio.com/downloads/) is not already installed, download and install it. You can download and install the free community edition.
-2. Install [Visual C++ Compiler Nov 2013 CTP](https://www.microsoft.com/en-us/download/details.aspx?id=41151).
+2. Install [Visual C++ Compiler](http://landinghub.visualstudio.com/visual-cpp-build-tools).
3. Back up all of the files in the ```C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC``` folder to a different location.
4. Copy all of the files in the ```C:\Program Files (x86)\Microsoft Visual C++ Compiler Nov 2013 CTP``` folder (or the folder where you extracted the zip archive) to the ```C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC``` folder, and overwrite all existing files.
5. Download and install [OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).