This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Commit

[v2.0][LICENSE] Port #20496 (#20610)

* Port #20496

* v1.x -> master

Co-authored-by: waytrue17 <[email protected]>
barry-jin and waytrue17 authored Sep 28, 2021
1 parent a720b15 commit 999d2a4
Showing 15 changed files with 26 additions and 26 deletions.
2 changes: 1 addition & 1 deletion 3rdparty/mshadow/README.md
@@ -50,5 +50,5 @@ Version

Projects Using MShadow
----------------------
- * [MXNet: Efficient and Flexible Distributed Deep Learning Framework](https://github.com/dmlc/mxnet)
+ * [MXNet: Efficient and Flexible Distributed Deep Learning Framework](https://github.com/apache/mxnet)
* [CXXNet: A lightweight C++ based deep learning framework](https://github.com/dmlc/cxxnet)
8 changes: 4 additions & 4 deletions NEWS.md
@@ -3572,7 +3572,7 @@ For more information and examples, see [full release notes](https://cwiki.apache
- ImageRecordIter now stores data in pinned memory to improve GPU memcopy speed.
### Bugfixes
- Cython interface is fixed. `make cython` and `python setup.py install --with-cython` should install the cython interface and reduce overhead in applications that use imperative/bucketing.
- - Fixed various bugs in Faster-RCNN example: https://github.com/dmlc/mxnet/pull/6486
+ - Fixed various bugs in Faster-RCNN example: https://github.com/apache/mxnet/pull/6486
- Fixed various bugs in SSD example.
- Fixed `out` argument not working for `zeros`, `ones`, `full`, etc.
- `expand_dims` now supports backward shape inference.
@@ -3648,9 +3648,9 @@ This is the last release before the NNVM refactor.
- Support CuDNN v5 by @antinucleon
- More applications
- Speech recognition by @yzhang87
-   - [Neural art](https://github.com/dmlc/mxnet/tree/master/example/neural-style) by @antinucleon
-   - [Detection](https://github.com/dmlc/mxnet/tree/master/example/rcnn), RCNN by @precedenceguo
-   - [Segmentation](https://github.com/dmlc/mxnet/tree/master/example/fcn-xs), FCN by @tornadomeet
+   - [Neural art](https://github.com/apache/mxnet/tree/v0.7.0/example/neural-style) by @antinucleon
+   - [Detection](https://github.com/apache/mxnet/tree/v0.7.0/example/rcnn), RCNN by @precedenceguo
+   - [Segmentation](https://github.com/apache/mxnet/tree/v0.7.0/example/fcn-xs), FCN by @tornadomeet
- [Face identification](https://github.com/tornadomeet/mxnet-face) by @tornadomeet
- More on the example

4 changes: 2 additions & 2 deletions README.md
@@ -81,10 +81,10 @@ What's New
* [0.12.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.12.0) - MXNet 0.12.0 Release.
* [0.11.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.11.0) - MXNet 0.11.0 Release.
* [Apache Incubator](http://incubator.apache.org/projects/mxnet.html) - We are now an Apache Incubator project.
- * [0.10.0 Release](https://github.com/dmlc/mxnet/releases/tag/v0.10.0) - MXNet 0.10.0 Release.
+ * [0.10.0 Release](https://github.com/apache/mxnet/releases/tag/v0.10.0) - MXNet 0.10.0 Release.
* [0.9.3 Release](./docs/architecture/release_note_0_9.md) - First 0.9 official release.
* [0.9.1 Release (NNVM refactor)](./docs/architecture/release_note_0_9.md) - NNVM branch is merged into master now. An official release will be made soon.
- * [0.8.0 Release](https://github.com/dmlc/mxnet/releases/tag/v0.8.0)
+ * [0.8.0 Release](https://github.com/apache/mxnet/releases/tag/v0.8.0)

### Ecosystem News

2 changes: 1 addition & 1 deletion docker/Dockerfiles/Dockerfile.in.lib.cpu
@@ -24,6 +24,6 @@ FROM ubuntu:14.04
COPY install/cpp.sh install/
RUN install/cpp.sh

- RUN git clone --recursive https://github.com/dmlc/mxnet && cd mxnet && \
+ RUN git clone --recursive https://github.com/apache/mxnet && cd mxnet && \
make -j$(nproc) && \
rm -r build
2 changes: 1 addition & 1 deletion docker/Dockerfiles/Dockerfile.in.lib.gpu
@@ -25,5 +25,5 @@ COPY install/cpp.sh install/
RUN install/cpp.sh

ENV BUILD_OPTS "USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1"
- RUN git clone --recursive https://github.com/dmlc/mxnet && cd mxnet && \
+ RUN git clone --recursive https://github.com/apache/mxnet && cd mxnet && \
make -j$(nproc) $BUILD_OPTS
2 changes: 1 addition & 1 deletion docs/static_site/src/pages/api/faq/cloud.md
@@ -112,7 +112,7 @@

```sh
cat hosts | xargs -I{} ssh -o StrictHostKeyChecking=no {} 'uname -a; pgrep python'
```

***Note:*** The preceding example is very simple to train and therefore isn't a good
- benchmark for distributed training. Consider using other [examples](https://github.com/dmlc/mxnet/tree/master/example/image-classification).
+ benchmark for distributed training. Consider using other [examples](https://github.com/apache/mxnet/tree/master/example/image-classification).

### More Options
#### Use Multiple Data Shards
6 changes: 3 additions & 3 deletions docs/static_site/src/pages/api/faq/new_op.md
@@ -144,12 +144,12 @@ To use the custom operator, create a mx.sym.Custom symbol with op_type as the registered name:
```python
mlp = mx.symbol.Custom(data=fc3, name='softmax', op_type='softmax')
```

- Please see the full code for this example [here](https://github.com/dmlc/mxnet/blob/master/example/numpy-ops/custom_softmax.py).
+ Please see the full code for this example [here](https://github.com/apache/mxnet/blob/master/example/numpy-ops/custom_softmax.py).
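
For orientation, the registration half that this `mx.sym.Custom` call relies on looks roughly like the sketch below. It is condensed from the shape of the v1.x Python CustomOp API; the linked `custom_softmax.py` remains the authoritative version.

```python
import mxnet as mx
import numpy as np

class Softmax(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        x = in_data[0].asnumpy()
        y = np.exp(x - x.max(axis=1, keepdims=True))  # numerically stable exp
        y /= y.sum(axis=1, keepdims=True)
        self.assign(out_data[0], req[0], mx.nd.array(y))

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        label = in_data[1].asnumpy().ravel().astype(int)
        y = out_data[0].asnumpy()
        y[np.arange(label.shape[0]), label] -= 1.0  # softmax + cross-entropy gradient
        self.assign(in_grad[0], req[0], mx.nd.array(y))

@mx.operator.register('softmax')
class SoftmaxProp(mx.operator.CustomOpProp):
    def __init__(self):
        super(SoftmaxProp, self).__init__(need_top_grad=False)

    def list_arguments(self):
        return ['data', 'label']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        data_shape = in_shape[0]
        label_shape = (in_shape[0][0],)
        return [data_shape, label_shape], [data_shape], []

    def create_operator(self, ctx, shapes, dtypes):
        return Softmax()
```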

## C++
With MXNet v0.9 (the NNVM refactor) or later, creating new operators has become easier.
Operators are now registered with NNVM.
- The following code is an example of how to register an operator (check out [src/operator/tensor](https://github.com/dmlc/mxnet/tree/master/src/operator/tensor) for more examples):
+ The following code is an example of how to register an operator (check out [src/operator/tensor](https://github.com/apache/mxnet/tree/master/src/operator/tensor) for more examples):

```c++
NNVM_REGISTER_OP(abs)
```

@@ -189,7 +189,7 @@ In this section, we will go through the basic attributes MXNet expects for all operators.
You can find the definition for them in the following two files:

- [nnvm/op_attr_types.h](https://github.com/dmlc/nnvm/blob/master/include/nnvm/op_attr_types.h)
- - [mxnet/op_attr_types.h](https://github.com/dmlc/mxnet/blob/master/include/mxnet/op_attr_types.h)
+ - [mxnet/op_attr_types.h](https://github.com/apache/mxnet/blob/master/include/mxnet/op_attr_types.h)

#### Descriptions (Optional)

10 changes: 5 additions & 5 deletions docs/static_site/src/pages/api/faq/perf.md
@@ -66,7 +66,7 @@ So whether you specify `cpu(0)` or `cpu()`, _MXNet_ will use all CPU cores on the machine.
### Scoring results
The following table shows the performance of MXNet-1.2.0.rc1,
namely, the number of images that can be predicted per second.
- We used [example/image-classification/benchmark_score.py](https://github.com/dmlc/mxnet/blob/master/example/image-classification/benchmark_score.py)
+ We used [example/image-classification/benchmark_score.py](https://github.com/apache/mxnet/blob/master/example/image-classification/benchmark_score.py)
to measure the performance on different AWS EC2 machines.

AWS EC2 C5.18xlarge:
@@ -150,7 +150,7 @@ and V100 (EC2 p3.2xlarge).
### Scoring results

Based on
- [example/image-classification/benchmark_score.py](https://github.com/dmlc/mxnet/blob/master/example/image-classification/benchmark_score.py)
+ [example/image-classification/benchmark_score.py](https://github.com/apache/mxnet/blob/master/example/image-classification/benchmark_score.py)
and MXNet-1.2.0.rc1, with cuDNN 7.0.5

- K80 (single GPU)
@@ -213,7 +213,7 @@ Below is the performance result on V100 using float 16.
### Training results

Based on
- [example/image-classification/train_imagenet.py](https://github.com/dmlc/mxnet/blob/master/example/image-classification/train_imagenet.py)
+ [example/image-classification/train_imagenet.py](https://github.com/apache/mxnet/blob/master/example/image-classification/train_imagenet.py)
and MXNet-1.2.0.rc1, with cuDNN 7.0.5. The benchmark script is available
[here](https://github.com/mli/mxnet-benchmark/blob/master/run_vary_batch.sh),
where the batch size for Alexnet is increased by 16x.
@@ -260,7 +260,7 @@ It's critical to use the proper type of `kvstore` to get the best performance.
Refer to [Distributed Training](https://mxnet.apache.org/api/faq/distributed_training.html) for more
details.
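
As a minimal illustration of the choice (v1.x Python API; `dist_sync` assumes the job was started through the distributed launcher):

```python
import mxnet as mx

# Single machine, multiple GPUs: aggregate gradients on the devices.
kv = mx.kv.create('device')

# Distributed synchronous SGD across workers (requires a launched cluster):
# kv = mx.kv.create('dist_sync')

# The kvstore is then handed to the training loop, e.g.:
# mod.fit(train_iter, kvstore=kv, num_epoch=10)
```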

- Besides, we can use [tools/bandwidth](https://github.com/dmlc/mxnet/tree/master/tools/bandwidth)
+ Besides, we can use [tools/bandwidth](https://github.com/apache/mxnet/tree/master/tools/bandwidth)
to find the communication cost per batch.
Ideally, the communication cost should be less than the time to compute a batch.
To reduce the communication cost, we can consider:
@@ -293,7 +293,7 @@ by summarizing at the operator level, instead of a function, kernel, or instruction.

The profiler can be turned on with an [environment variable]({{'/api/faq/env_var#control-the-profiler' | relative_url}})
for an entire program run, or programmatically for just part of a run. Note that by default the profiler hides the details of each individual operator, and you can reveal the details by setting environment variables `MXNET_EXEC_BULK_EXEC_INFERENCE`, `MXNET_EXEC_BULK_EXEC_MAX_NODE_TRAIN` and `MXNET_EXEC_BULK_EXEC_TRAIN` to 0.
- See [example/profiler](https://github.com/dmlc/mxnet/tree/master/example/profiler)
+ See [example/profiler](https://github.com/apache/mxnet/tree/master/example/profiler)
for complete examples of how to use the profiler in code, or [this tutorial](https://mxnet.apache.org/api/python/docs/tutorials/performance/backend/profiler.html) on how to profile MXNet performance.

Briefly, the Python code looks like:
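
A minimal sketch, assuming the v1.x `mx.profiler` module:

```python
import mxnet as mx

# Write per-operator timings to profile.json (viewable in chrome://tracing).
mx.profiler.set_config(profile_all=True, filename='profile.json')

mx.profiler.set_state('run')   # start collecting
# ... run the training or inference code to be measured ...
mx.nd.waitall()                # make sure pending async work is recorded
mx.profiler.set_state('stop')  # stop collecting and flush the trace
```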
2 changes: 1 addition & 1 deletion docs/static_site/src/pages/api/faq/recordio.md
@@ -34,7 +34,7 @@ RecordIO implements a file format for a sequence of records. We recommend storing

We provide two tools for creating a RecordIO dataset.

- * [im2rec.cc](https://github.com/dmlc/mxnet/blob/master/tools/im2rec.cc) - implements the tool using the C++ API.
+ * [im2rec.cc](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.cc) - implements the tool using the C++ API.
* [im2rec.py](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) - implements the tool using the Python API.

Both provide the same output: a RecordIO dataset.
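
The format is also easy to script directly; below is a minimal round-trip sketch using the Python `mx.recordio` module (the file name is illustrative):

```python
import mxnet as mx

# Pack a few records, then read them back in order.
writer = mx.recordio.MXRecordIO('data.rec', 'w')
for i in range(3):
    writer.write(('record-%d' % i).encode())  # each record is raw bytes
writer.close()

reader = mx.recordio.MXRecordIO('data.rec', 'r')
while True:
    item = reader.read()  # returns None once all records are consumed
    if item is None:
        break
    print(item)
reader.close()
```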
2 changes: 1 addition & 1 deletion docs/static_site/src/pages/api/faq/s3_integration.md
@@ -67,7 +67,7 @@ aws s3 sync ./training-data s3://bucket-name/training-data

Once the data is in S3, it is very straightforward to use it from MXNet. Any data iterator that can read/write data from a local drive can also read/write data from S3.

- Let's modify existing example code in the MXNet repository to read data from S3 instead of from local disk. [`mxnet/tests/python/train/test_conv.py`](https://github.com/dmlc/mxnet/blob/master/tests/python/train/test_conv.py) trains a convolutional network using MNIST data from local disk. We'll make the following change to read the data from S3 instead.
+ Let's modify existing example code in the MXNet repository to read data from S3 instead of from local disk. [`mxnet/tests/python/train/test_conv.py`](https://github.com/apache/mxnet/blob/master/tests/python/train/test_conv.py) trains a convolutional network using MNIST data from local disk. We'll make the following change to read the data from S3 instead.

```
~/mxnet$ sed -i -- 's/data\//s3:\/\/bucket-name\/training-data\//g' ./tests/python/train/test_conv.py
```
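
For instance, the usual MNIST iterator only needs its paths swapped; a sketch, assuming an MXNet build with `USE_S3=1` and AWS credentials in the environment:

```python
import mxnet as mx

# Same call as the local-disk version; only the paths change.
train_iter = mx.io.MNISTIter(
    image='s3://bucket-name/training-data/train-images-idx3-ubyte',
    label='s3://bucket-name/training-data/train-labels-idx1-ubyte',
    batch_size=100, shuffle=True, flat=True)
```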
@@ -43,7 +43,7 @@ You'll get two files, `mnist_train.csv` that contains 60,000 examples of hand written

Custom CSV Iterator
----------
- Next we are going to create a custom CSV Iterator based on the [C++ CSVIterator class](https://github.com/dmlc/mxnet/blob/master/src/io/iter_csv.cc).
+ Next we are going to create a custom CSV Iterator based on the [C++ CSVIterator class](https://github.com/apache/mxnet/blob/master/src/io/iter_csv.cc).

For that we are going to use the R function `mx.io.CSVIter` as a base class. This class has the parameters `data.csv`, `data.shape`, and `batch.size`, and two main functions: `iter.next()`, which advances the iterator to the next batch of data, and `value()`, which returns the training data and the label.
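
For comparison, a minimal sketch of the same built-in iterator from the Python API, with a hypothetical 784-column `train.csv`:

```python
import mxnet as mx

data_iter = mx.io.CSVIter(data_csv='train.csv',  # one example per row
                          data_shape=(784,),
                          batch_size=32)
batch = next(iter(data_iter))
print(batch.data[0].shape)  # (32, 784)
```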

4 changes: 2 additions & 2 deletions example/README.md
@@ -48,7 +48,7 @@ Example applications or scripts should be submitted in this `example` folder.

### Tutorials

- If you have a tutorial idea for the website, download the [Jupyter notebook tutorial template](https://github.com/dmlc/mxnet/tree/master/example/MXNetTutorialTemplate.ipynb).
+ If you have a tutorial idea for the website, download the [Jupyter notebook tutorial template](https://github.com/apache/mxnet/tree/master/example/MXNetTutorialTemplate.ipynb).

#### Tutorial location

@@ -122,7 +122,7 @@ If your tutorial depends on specific packages, simply add them to this provision
* "Learn to sort by LSTM" by [xlvector](https://github.com/xlvector) [github link](https://github.com/xlvector/learning-dl/tree/master/mxnet/lstm_sort) [Blog in Chinese](http://blog.xlvector.net/2016-05/mxnet-lstm-example/)
* [Neural Art using extremely lightweight (<500K) neural network](https://github.com/pavelgonchar/neural-art-mini) Lightweight version of mxnet neural art implementation by [Pavel Gonchar](https://github.com/pavelgonchar)
* [Neural Art with generative networks](https://github.com/zhaw/neural_style) by [zhaw](https://github.com/zhaw)
- * [Faster R-CNN in MXNet with distributed implementation and data parallelization](https://github.com/dmlc/mxnet/tree/master/example/rcnn)
+ * [Faster R-CNN in MXNet with distributed implementation and data parallelization](https://github.com/apache/mxnet/tree/master/example/rcnn)
* [Asynchronous Methods for Deep Reinforcement Learning in MXNet](https://github.com/zmonoid/Asyn-RL-MXNet/blob/master/mx_asyn.py) by [zmonoid](https://github.com/zmonoid)
* [Deep Q-learning in MXNet](https://github.com/zmonoid/DQN-MXNet) by [zmonoid](https://github.com/zmonoid)
* [Face Detection with End-to-End Integration of a ConvNet and a 3D Model (ECCV16)](https://github.com/tfwu/FaceDetection-ConvNet-3D) by [tfwu](https://github.com/tfwu), source code for paper Yunzhu Li, Benyuan Sun, Tianfu Wu and Yizhou Wang, "Face Detection with End-to-End Integration of a ConvNet and a 3D Model", ECCV 2016 <https://arxiv.org/abs/1606.00850>
2 changes: 1 addition & 1 deletion src/engine/naive_engine.cc
@@ -249,7 +249,7 @@ class NaiveEngine final : public Engine {
#endif
/*!
* \brief Holding a shared_ptr to the object pool to prevent it from being destructed too early
-  * See also #309 (https://github.com/dmlc/mxnet/issues/309) and similar fix in threaded_engine.h.
+  * See also #309 (https://github.com/apache/mxnet/issues/309) and similar fix in threaded_engine.h.
* Without this, segfaults seen on CentOS7 in
* test_operator_gpu.py:test_convolution_multiple_streams
*/
2 changes: 1 addition & 1 deletion src/engine/threaded_engine.h
@@ -585,7 +585,7 @@ class ThreadedEngine : public Engine {

/*!
* \brief Holding a shared_ptr to the object pool to prevent it from being destructed too early
-  * See also #309 (https://github.com/dmlc/mxnet/issues/309)
+  * See also #309 (https://github.com/apache/mxnet/issues/309)
*/
std::shared_ptr<common::ObjectPool<ThreadedOpr>> objpool_opr_ref_;
std::shared_ptr<common::ObjectPool<OprBlock>> objpool_blk_ref_;
2 changes: 1 addition & 1 deletion src/operator/svm_output.cc
@@ -87,7 +87,7 @@ MXNET_REGISTER_OP_PROPERTY(SVMOutput, SVMOutputProp)
.describe(R"code(Computes support vector machine based transformation of the input.
This tutorial demonstrates using SVM as output layer for classification instead of softmax:
- https://github.com/dmlc/mxnet/tree/v1.x/example/svm_mnist.
+ https://github.com/apache/mxnet/tree/v1.x/example/svm_mnist.
)code")
.add_argument("data", "NDArray-or-Symbol", "Input data for SVM transformation.")
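
A hypothetical usage sketch from the Python symbol API, swapping `SVMOutput` in where `SoftmaxOutput` would normally sit:

```python
import mxnet as mx

data = mx.sym.Variable('data')
label = mx.sym.Variable('label')
fc = mx.sym.FullyConnected(data=data, num_hidden=10, name='fc')
svm = mx.sym.SVMOutput(data=fc, label=label, margin=1.0,
                       regularization_coefficient=1.0, name='svm')
```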
